https://wiki.egi.eu/w/api.php?action=feedcontributions&user=Patrykl&feedformat=atom
EGIWiki - User contributions [en]
2024-03-29T08:36:43Z
User contributions
MediaWiki 1.37.1
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_siteconf&diff=101031
Federated Cloud siteconf
2019-02-06T08:46:33Z
<p>Patrykl: /* Site-specific configuration */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
= Follow-up =<br />
<br />
*[2017-10-03] It appears that most of the sites guarantee outgoing connectivity. The exceptions will be contacted to understand whether there is room to implement it, with the goal of developing a callback from the VMs that collects and reports information about the local VM configuration to VMOps. Action: Vincenzo to contact IFCA and push the 3 sites that have not replied yet. <br><br />
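The VM-side callback mentioned in the action item is not specified anywhere on this page; purely as a sketch, assuming a small Python agent and leaving the actual VMOps endpoint and report format open, it could gather facts like these: <br />

```python
import json
import platform
import socket


def collect_vm_network_info(probe_host="192.0.2.1", probe_port=53):
    """Gather basic local network facts a VM-side callback could report.

    Connecting a UDP socket does not send any packet; it only asks the
    kernel which local address and route would be used, which doubles as
    a cheap check that an outgoing route exists. The probe address is a
    documentation address (TEST-NET-1), chosen arbitrarily here.
    """
    info = {"hostname": socket.gethostname(), "os": platform.system()}
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.connect((probe_host, probe_port))
        info["local_ip"] = s.getsockname()[0]
        info["has_outgoing_route"] = True
        s.close()
    except OSError:
        info["local_ip"] = None
        info["has_outgoing_route"] = False
    return info


if __name__ == "__main__":
    # In a real callback this JSON would be sent to a VMOps endpoint
    # (hypothetical; no such endpoint is defined on this page).
    print(json.dumps(collect_vm_network_info(), indent=2))
```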
<br />
= Site-specific configuration =<br />
<br />
The main purpose of this page is to collect the site-specific configuration parameters of the Federated Cloud sites, allowing comparison among them, identification of differences, and retrieval of the parameters of a specific site. <br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
Parameters provided by each site are: <br />
<br />
*'''default network name''': the name of the network assigned by default when starting a VM at the site; at the moment the network may be private, public, or not assigned at all; example: ''/network/PRIVATE'' <br />
*'''default network type''', can be ''public'', ''private'', or ''N/A'' (not available) <br />
*'''public network name''': name of the public network to be used; usually this is different from the default network, which is private in most of the cases; example: ''/network/PUBLIC'' <br />
*'''is outgoing connectivity guaranteed by default at start time''': please say YES if newly started VMs have outgoing connectivity right away, either through a public IP or, if no public IP is assigned at instantiation time, through NAT (a private IP enabled for outgoing connections) <br />
*'''port default firewall policy''': default policy available at infrastructure level (firewall); usually it's either "all open" or "all closed" <br />
*'''ports firewall configuration''': port configuration on top of the default firewall policy; here you can specify, e.g., which ports are open on the firewall when the default policy is "all closed"; example: ''22, ICMP open'' <br />
*'''ports default CMF policy''': on OpenStack, it is possible to open/close ports using the OpenStack user interface; this "security groups" feature is an additional firewall layer, independent of the infrastructure (low-level) firewall, and can be configured by the user (through the Horizon interface or the API) or by asking for support through the EGI Helpdesk. Example: "all open" or "all closed". <br />
*'''ports policy on CMF''': if ports default CMF policy is "all closed", you may want to specify here if there are exceptions. Example: ssh. <br />
*'''mandatory closed ports''': ports that cannot be opened due to local rules, national regulations, or infrastructure constraints. Example: 25 is usually not available for security reasons (use 587 instead). <br />
*'''port configuration requests method''': how the site handles port reconfiguration requests. Examples: GGUS, Horizon, other channels. <br />
*'''users requests''': please mention here any special requests received from users in the past that you have worked on in order to make a specific use case run at your site. <br />
*'''comments''': any other remarks that could help us improve this page.<br />
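The port notations in the table below are free text (e.g. ''22, ICMP open'' or ''22, 80, 443, 7000-7020''). Purely as an illustration of how they could be normalized for comparison across sites, here is a small hypothetical helper; the parsing rules are my own assumption, not part of any site's interface: <br />

```python
def parse_port_spec(spec):
    """Parse a free-text port specification such as
    '22, 80, 443, 7000-7020' or '22, ICMP open' into a
    (ports, protocols) pair.

    ports: sorted list of individual port numbers (ranges expanded)
    protocols: set of non-port keywords such as 'ICMP'
    This is a sketch; notations like '22/80/443' are not handled.
    """
    ports, protocols = set(), set()
    for token in spec.replace("open", "").split(","):
        token = token.strip()
        if not token:
            continue
        if "-" in token and token.replace("-", "").isdigit():
            # A range like 7000-7020: expand it into individual ports.
            lo, hi = (int(x) for x in token.split("-", 1))
            ports.update(range(lo, hi + 1))
        elif token.isdigit():
            ports.add(int(token))
        else:
            # Anything non-numeric (e.g. ICMP) is kept as a keyword.
            protocols.add(token.upper())
    return sorted(ports), protocols
```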
<br />
'''Last update: September 2017''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | <br> <br />
! style="border-bottom:1px solid black;" | default network name <br />
! style="border-bottom:1px solid black;" | default network type <br />
! style="border-bottom:1px solid black;" | public network name <br />
! style="border-bottom:1px solid black;" | is outgoing connectivity guaranteed by default at start time? <br />
! style="border-bottom:1px solid black;" | port default firewall policy <br />
! style="border-bottom:1px solid black;" | ports firewall configuration <br />
! style="border-bottom:1px solid black;" | ports default CMF policy <br />
! style="border-bottom:1px solid black;" | ports policy on CMF <br />
! style="border-bottom:1px solid black;" | mandatory closed ports <br />
! style="border-bottom:1px solid black;" | port configuration requests method <br />
! style="border-bottom:1px solid black;" | users requests <br />
! style="border-bottom:1px solid black;" | comments<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | YES * <br />
| style="border-bottom:1px dotted silver;" | all open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | none <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | * Outgoing connectivity is available if an IP address is assigned to the VM's virtual router (this is the case by default). Users can disable this if desired.<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS ticket <br />
| style="border-bottom:1px dotted silver;" | 80, 8080, 443 <br />
| style="border-bottom:1px dotted silver;" | some users have requested to limit access to their VMs to a given list of source IPs<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | Static DHCP server (IP assigned if network contextualization fails)<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | https://carach5.ics.muni.cz:11443/network/24<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | 67/udp, 137/udp<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | One request to provide a private network.<br> <br />
| style="border-bottom:1px dotted silver;" | As soon as security groups are implemented in OCCI, we will switch to a more restrictive mode where only TCP 22 is open by default. Users will have a self-service control over this via OCCI.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/500ed7e7-162e-4d97-916e-bc7bc3ab9b41<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | As we know, with OCCI we can create and destroy VMs and attach/link networks.<br>Would it be possible to access (SSH) VMs with a private IP through OCCI?<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CYFRONET-CLOUD <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | /network/PRIVATE<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22, 80, 443, 7000-7020 <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | all closed, except for 22, 80, 443, 7000-7020<br> <br />
| style="border-bottom:1px dotted silver;" | 25<br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | 3306, redirected to 7000; 25 (from the inside), redirected to 587.<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 7000-7020 have been defined by our network security team. We have so far redirected any requests for other ports to this range. There was a debate once when users insisted on port 3306 for MySQL, however we convinced them that their client was flawed by not supporting other ports. In the same way, users expected to be able to send email via port 25, we convinced them that port 587 is intended for that purpose.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | https://occi.cloud.gwdg.de:3100/network/36<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | https://occi.cloud.gwdg.de:3100/network/36<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22, 80, 443, ICMP<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | Not Available<br> <br />
| style="border-bottom:1px dotted silver;" | Not Available<br> <br />
| style="border-bottom:1px dotted silver;" | None<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | All newly created VMs get a public IPv4 and a public IPv6 address <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | provider-&lt;project VLAN ID&gt;<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | external<br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | any<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22,ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored, unusual activities (e.g. very high volumes/frequency connections) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | port 8899 by enmr.eu<br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored, unusual activities (e.g. very high volumes/frequency connections) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/9a393ad0-057e-4d74-8a50-1818114caaba<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22/80/443/8080 and ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users have the ability to use additional security group to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 21, 25<br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack for 80/443/8080, GGUS otherwise<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Users are not allowed to create/modify/delete security groups (in particular in a catch-all VO). Comment from the ticket: there is no name for the default network. Indeed, with OpenStack and OOI, private networks do not have a default name (unlike the public one); each private network has its own ID, which is different for each project/VO.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/&lt;UUID of the internal project network&gt;<br> <br />
| style="border-bottom:1px dotted silver;" | private <br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | upon request: 8899 (from a given IM/EC3 server), 80 to be negotiated<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI <br />
| style="border-bottom:1px dotted silver;" | /occi/network/fe82ef7b-4bb7-4c1e-b4ec-ec5c1b0c7333<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public_net<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all open except port 111<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | ssh (22) open<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | several ports because fedcloud users are currently running different services: web portals and applications (80/8080,443), onedata (9443), hadoop, elasticsearch, etc.<br> <br />
| style="border-bottom:1px dotted silver;" | We are now configuring the private network in new tenants with the latest version of ooi (1.1.2), which fixes a bug in the listing of networks. Newly created tenants will therefore have a private (isolated) network as well as the public (shared) one. We encourage you to use the private network whenever this is compatible with the architecture of the virtual infrastructure being deployed. If needed, we can provide direct access to the private network via our VPN (accessible with personal credentials).<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | Yes <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | ICMP, 22, 80, 443 open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Temporary configuration: the previous setup, with a default routed internal network (VXLAN) and an optional public provider network, did not work; a floating public IP could not be attached through OCCI (it worked through Horizon).<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | http://fcctrl.ulakbim.gov.tr:8787/occi1.2/network/ed61199b-baac-4524-b801-324f341b0d89 for fedcloud.egi.eu<br> <br />
| style="border-bottom:1px dotted silver;" | private <br> <br />
| style="border-bottom:1px dotted silver;" | http://fcctrl.ulakbim.gov.tr:8787/occi1.2/network/ed61199b-baac-4524-b801-324f341b0d89 <br />
http://fcctrl.ulakbim.gov.tr:8787/occi1.2/network/PUBLIC<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, 443, ICMP open <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open <br> <br />
| style="border-bottom:1px dotted silver;" | None <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS <br />
| style="border-bottom:1px dotted silver;" | 443 <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | /network/6 <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, 443 <br />
| style="border-bottom:1px dotted silver;" | NA <br />
| style="border-bottom:1px dotted silver;" | NA <br />
| style="border-bottom:1px dotted silver;" | None <br />
| style="border-bottom:1px dotted silver;" | GGUS, email <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | At this point we are considering the option of migrating to OpenStack<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | &lt;PROJECTNAME&gt;_private_net<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public_net<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users have the ability to use additional security group to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | ICMP, 22, 80, 443 open<br> <br />
| style="border-bottom:1px dotted silver;" | ICMP and ports 22, 80, 443 are open by default. Users can add an additional security group to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS ticket<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
<br><br />
<br />
= Upgrade campaigns and surveys =<br />
<br />
== cASO upgrade ==<br />
<br />
'''Started on September 21st, 2017. Still open.''' <br />
<br />
''The APEL team would like to encourage OpenStack sites to upgrade their version of cASO to this version: https://appdb.egi.eu/store/software/caso/releases/1.x/1.1.1/ Sites that are currently running 1.0.X, or were running it in the past, should upgrade and republish the period in which 1.0.X was in use. Sites that never ran 1.0.X, i.e. they went straight to 1.1.0 or never moved away from the older 0.X.X versions, don't need to republish; they only need to upgrade.'' <br />
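The republishing rule above can be sketched as a small decision helper; the version-list input and the returned strings are illustrative assumptions, not part of the campaign itself: <br />

```python
def caso_action(versions_run, target="1.1.1"):
    """Decide what a site needs to do for the cASO 1.1.1 campaign.

    versions_run: every cASO version the site has run, past and present.
    Per the APEL guidance: sites that ever ran 1.0.x must upgrade and
    republish the period in which 1.0.x was in use; sites that went
    straight to 1.1.0 or stayed on 0.x.x only need to upgrade.
    """
    if target in versions_run:
        base = "already on {}".format(target)
    else:
        base = "upgrade to {}".format(target)
    if any(v.startswith("1.0.") for v in versions_run):
        # Ran 1.0.x at some point: the affected period must be republished.
        return base + ", republish 1.0.x period"
    return base
```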
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Resource Centre <br />
! style="border-bottom:1px solid black;" | Ticket <br />
! style="border-bottom:1px solid black;" | Status <br />
! style="border-bottom:1px solid black;" | comments<br />
|-<br />
| 100IT <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130665 <br />
| CLOSED <br />
| We were running Caso 1.1.0, and have now upgraded to 1.1.1. No republishing needed.<br><br />
|-<br />
| CYFRONET-CLOUD <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130666 <br />
| OPEN <br />
| No reply.<br />
|-<br />
| FZJ <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130667 <br />
| ON HOLD <br />
| We're currently running cASO 0.3.0. When trying to run cASO 1.1.1, I have trouble extracting records due to "MissingAuthPlugin: An auth plugin is required to determine endpoint URL". I have to put this on hold due to other, more important obligations.<br><br />
|-<br />
| IFCA-LCG2 <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130668 <br />
| CLOSED <br />
| already running caso 1.1.1<br />
|-<br />
| IISAS-FedCloud <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130669 <br />
| CLOSED <br />
| cASO v1.1.1 is already installed. Previous version was 0.3.2.<br />
|-<br />
| IISAS-GPUCloud <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130670 <br />
| CLOSED <br />
| cASO v1.1.1 is already installed. Previous version was 0.3.2.<br />
|-<br />
| INFN-CATANIA-STACK <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130671 <br />
| OPEN <br />
| Experiencing issues with the configuration of the new installation, following on the fedcloud list.<br><br />
|-<br />
| RECAS-BARI <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130672 <br />
| OPEN <br />
| In progress<br><br />
|-<br />
| INFN-PADOVA-STACK <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130673 <br />
| CLOSED<br />
| <span>At the time of the cASO update, the accounting data had not been published for a couple of weeks due to an NFS failure. After the update and the NFS repair, the old accounting data were properly pushed to the accounting DB. Looking at the accounting portal, the usage of September seems in line with the previous months, so I think re-publication is not needed.</span><br />
|-<br />
| TR-FC1-ULAKBIM <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130674 <br />
| CLOSED <br />
| We are using fedcloud appliance for caso. Our version is caso-0.3.2. We will update as soon as possible. -&gt; We have updated our fedcloud appliance with the latest docker image fedcloud/caso. Now it is caso-1.1.1-py2.7. No republishing needed.<br><br />
|-<br />
| IN2P3-IRES <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130675 <br />
| CLOSED <br />
| I have republished the data, but it is still wrong&nbsp;:-/ We may close this ticket. I have another ticket for the accounting issue:<br> [https://ggus.eu/index.php?mode=ticket_info&ticket_id=130327 https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130327]<br />
|-<br />
| NCG-INGRID-PT <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130676 <br />
| CLOSED <br />
| No accounting at the site.<br><br />
|-<br />
| SCAI <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130677 <br />
| CLOSED <br />
| We went directly to cASO 1.1.0 and skipped 1.0.X, so we should not be affected by republishing. We did the upgrade yesterday; upgraded to cASO 1.1.1.<br />
|-<br />
| CLOUDIFIN <br />
| https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=130678 <br />
| CLOSED <br />
| After upgrade (from cASO 0.3.3) we have cASO 1.1.1.<br />
|}<br />
<br />
== Switching to default private network ==<br />
<br />
'''Started on October 2nd, 2017. Direct emails used, no tickets. Still open. '''<br> <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Resource Centre <br />
! style="border-bottom:1px solid black;" | Result <br />
! style="border-bottom:1px solid black;" | Details <br />
! style="border-bottom:1px solid black;" | Comments, feedback, requests<br />
|-<br />
| CESNET-MetaCloud <br />
| bgcolor="#00ff00" | Will switch to private <br />
| <br />
CESNET-MetaCloud will enter the site decommissioning process shortly. The newly built and registered site (name still TBD) will follow this private-by-default configuration.<br> <br />
<br />
| <br />
*What does 'private' actually mean? '''Private with NAT or without?''' These have different configurations and user expectations.<br />
*On OpenNebula, having private-by-default and attaching a public network means having '''two interfaces in the virtual machine. Are endorsed images ready for this?''' Can user applications work in a multi-homed environment?<br />
*When combining private with NAT and public on a single machine, users will run into routing issues unless they adjust routing metrics. This can be handled in the appliance, but the appliance must be ready for it. DHCP alone does not support setting routing metrics, so we as a provider cannot do this from the outside.<br />
<br />
*As is apparent from the above, '''using this approach will highlight CMF-specific behaviors in networking'''.<br />
<br />
|-<br />
| CESGA<br> <br />
| bgcolor="#00ff00" | Will switch to private <br />
| <br />
I agree about the benefits of using a private IP pool.<br> <br />
<br />
To implement the change: <br />
<br />
1- I will change the IPs assigned to the current default virtualnet from the public IP pool to the private one. This way it will not be necessary to change the current VM templates. <br />
<br />
2- Create a new virtualnet with the old public IP pool assigned. <br> <br />
<br />
| <br />
In order to let the site ask for a public IP in a standardized manner, please clarify whether any condition is mandatory, for example the '''name of the public virtualnet'''. <br />
<br />
|-<br />
| SCAI<br> <br />
| bgcolor="#ffff00" | Trying to switch to private, network configuration issues<br> <br />
| We had private networks (OVS VXLAN tenant networks) as the default before we switched to Mitaka, but since Mitaka we have had problems associating a floating IP through OCCI. We recently tried to switch back, but we now have problems getting the tenant networks to work (we get no connectivity through the VXLAN tunnel). If we can fix that, we will change directly. <br />
| <br><br />
|-<br />
| MK-04-FINKICLOUD<br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| RECAS-BARI<br> <br />
| bgcolor="#ff0000" | Cannot switch to private<br> <br />
| <div>At RECAS-BARI we have implemented a particular setup of the networking service provided by the CMF OpenStack: a public network is shared and visible across all the tenants, whereas isolated private networks can be added in each tenant. It is not possible to hide the public network, and it is not possible to limit the number of public IPs assigned to tenants. ''This configuration is different from the most common OpenStack setup, where an external network is used to provision floating IPs with a dedicated quota.'' Our approach, even if less common, guarantees better performance and user experience ('''in the past we provided floating IPs using overlay networks and our users were not happy with their stability and performance''').</div><div><br></div><br> <br />
| <div>Now, coming to the point about application portability, I think that this can be achieved in two ways:</div><div>1) asking sites to change their settings in order to achieve a sort of homogeneity;</div><div>2) '''accepting site heterogeneity: publish site networking information and provide best practices and recipes (if needed) for the users'''.</div><div><br></div><div>'''Following the first path (as you are doing) can be too invasive for sites, since you are asking them to change their policies or configurations, and in some cases there may be technical issues that prevent sites from changing the default configuration, as in our case.'''</div><div>The second approach would not require any critical change at site level: the site networking characteristics could be published so that users or higher-level tools can easily retrieve that information; moreover, users should be required to follow some simple best practices.</div><div>For example, one of the problems that EGI users encounter when we configure the additional private network in the tenant is that they get the “Multiple networks” error from OCCI while creating a virtual server. They are not used to specifying the network, and in some cases they do not know how to do that.</div><div>Therefore, today an application written for sites with the default OpenStack configuration is not portable to our cloud, but it could become portable by always specifying the network (also with the default OpenStack configuration). Of course this is just an example; for other aspects there may be a need for better abstraction at a higher (federation) level in order to provide users with transparent access to the resources of the sites.</div><br />
|-<br />
| INFN-CATANIA-STACK<br> <br />
| bgcolor="#00ff00" | Can switch to private <br />
| We can switch to assigning private IPs without problems. <br />
| '''How can the user access a VM that has a private IP, when a public one is not needed?'''<br />
|-<br />
| IISAS-*<br> <br />
| bgcolor="#00ff00" | Can switch to private <br />
| <div>For the IISAS cloud sites, there are no real technical problems preventing us from changing the default network to private IPs. <br></div> <br />
| The main problem for us is '''potential disruption of important services hosted on the site during the change''', therefore we must '''carefully plan ahead before switching'''.<br />
|-<br />
| HG-09-Okeanos-Cloud<br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| GoeGrid<br> <br />
| <br> <br />
| <br> <br />
| <br><br />
|-<br />
| BEgrid-BELNET<br> <br />
| bgcolor="#ffff00" | Will deploy the private setup and make some tests<br> <br />
| <br />
We are running out of public IPs, and we also understand the security concern with having public IPs by default. <br />
<br />
On each hypervisor we will create an isolated bridge with OVS, and these bridges will then be connected to each other with GRE tunnels in a full-mesh topology. <br />
<br />
| '''It is not yet entirely clear how users will be able to access their VMs in the private network.''' The scenario where the user dynamically adds a new public NIC to a VM has to be carefully tested, because adding a new NIC triggers changes in the network configuration (default route, ...). We need to understand how the operating system handles this.<br><br />
|}</div>
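Several of the comments above (CESNET-MetaCloud, BEgrid-BELNET) converge on the same practical issue: when a VM ends up with both a private NIC (with NAT) and a public NIC, the guest needs explicit routing metrics, since DHCP alone cannot convey them. As a purely illustrative sketch (interface names and metric values are hypothetical), a Debian-style /etc/network/interfaces fragment baked into the appliance could pin the default route to the public interface:

```
# Hypothetical appliance fragment: eth0 = private network (NAT),
# eth1 = public network. The lower metric wins, so the default
# route goes via the public NIC while eth0 remains a fallback.
auto eth0
iface eth0 inet dhcp
    metric 200

auto eth1
iface eth1 inet dhcp
    metric 100
```

As CESNET points out, an adjustment like this has to live in the endorsed image or its contextualization rather than on the provider side, because the provider cannot push routing metrics through DHCP.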
Patrykl
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=101030
Federated Cloud infrastructure status
2019-02-06T08:37:18Z
<p>Patrykl: /* Status of the Federated Cloud */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== CMF support highlights ==<br />
<br />
'''Last update: April 2018''' <br />
<br />
{| cellspacing="1" cellpadding="1" width="1553" border="1"<br />
|-<br />
! scope="col" | CMF <br />
! scope="col" | Version <br />
! scope="col" | Comments<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | &gt;=Ocata <br />
| At https://releases.openstack.org/ you can see that Ocata and later releases are "Maintained", meaning supported for approximately 18 months after the release date. For Ocata, released on 2017-02-22, this means approximately until August 2018.<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | Mitaka/Ubuntu LTS <br />
| Some RCs are also using Mitaka from Ubuntu LTS, which is supported for 5 years. Due to project constraints, some of these could decide to switch to a newer version, or stay with the current Mitaka/Ubuntu setup as planned last year; '''we will ask about plans once again, to confirm whether they will stay with Mitaka'''<br />
|-<br />
| OpenStack <br />
| bgcolor="#ff6666" align="center" | &lt;Ocata other than Mitaka/Ubuntu LTS <br />
| Some RCs run an outdated version of OpenStack. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy], which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade.'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#ff6666" align="center" | 4 <br />
| No longer supported; OpenNebula RCs need to update to OpenNebula 5. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy], which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade.'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#66ff99" align="center" | 5 <br />
| Supported<br />
|-<br />
| Synnefo <br />
| bgcolor="#66ff99" align="center" | <br> <br />
| Supported<br />
|}<br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: June 2018 http://go.egi.eu/fedcloudstatus''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | CMF Upgrade plans <br />
! style="border-bottom:1px solid black;" | Using CMD <br />
! style="border-bottom:1px solid black;" | KVM/XEN? <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Pay-per-use <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 32 GB of RAM, up to 1TB attachable block storage<br> <br />
| style="border-bottom:1px dotted silver;" | Network ready, integration planned for Queens release <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Ocata <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://api.cloud.cyfronet.pl:5000/ Openstack API]<br>[https://panel.cloud.cyfronet.pl Horizon dashboard with EGI AAI integration via OIDC]<br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2019-02-06<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4.6 <br />
| style="border-bottom:1px dotted silver;" | Not in the near future. This seems to be the newest version compatible with CMD.<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Yes (CMD-ONE-1) <br />
<br />
Except occi-server (2.0.4) and early adopter of cloudkeeper/cloudkeeper-one (1.6.0/1.3.0)<br />
<br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 192 Cores total <br />
<br />
- 340 GB of RAM total <br />
<br />
- Two data stores of 3 TB and 700 GB <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 100 GB <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Ready (network level) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-04-23<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4.14.2 <br />
| style="border-bottom:1px dotted silver;" | Migration to OpenStack Queens/Rocky with OpenID Connect authN/authZ. ETA Q1/2019. <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br />
| style="border-bottom:1px dotted silver;" | Migration to newer versions of OpenStack planned / under way<br />
| style="border-bottom:1px dotted silver;" | NO<br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://api.prod.cloud.gwdg.de:5000/v3 Openstack API]<br>[https://cloud.gwdg.de/horizon Horizon dashboard with EGI AAI integration via OIDC]<br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 352 Cores with 1408 GB RAM <br />
<br />
&nbsp;- 50 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 64 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-10-26<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton/Ocata/Pike/Queens <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Ubuntu 18.04 and Pike during Q3 2018<br> <br />
| style="border-bottom:1px dotted silver;" | No<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 7744 Cores: <br />
<br />
&nbsp;&nbsp;&nbsp; - 32 nodes x 8 vcpus x 16GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 36 nodes x 24 vcpus x 48GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 34 nodes x 32 vcpus x 128GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 2 nodes x 80 vcpus x 1TB RAM<br> <br />
<br />
&nbsp;&nbsp;&nbsp; - 24 nodes x 32 vcpus x 32GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 144 nodes x 32 vcpus x 32GB RAM <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
GPU and Infiniband access (upon request)<br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | gpu2cpu12: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" | Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM - 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB) <br />
| Ready <br />
| No<br />
| 2018-06-25<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu 18.04. Planned but not scheduled yet <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI]<br>[http://cloud.recas.ba.infn.it:8774/v2.1/%(tenant_id)s Openstack Compute]<br>[http://cloud.recas.ba.infn.it:8080/v1/AUTH_%(tenant_id)s Openstack Object-Store]<br>[http://egi-cloud.recas.ba.infn.it/ Horizon dashboard with auth token] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | Not ready yet but planned <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-07-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago<br />
| style="border-bottom:1px dotted silver;" | [https://goc.egi.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Rocky <br />
| style="border-bottom:1px dotted silver;" | Rocky is the current release.<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it/v3/ OpenStack Auth URL]<br>[https://egi-cloud.pd.infn.it:8443/dashboard OpenStack Horizon dashboard with EGI Check-in AAI]<br />
| style="border-bottom:1px dotted silver;" | <br />
- 184 Cores with 384 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 40 VCPUs, 90GB RAM, 200GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2019-02-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 744 Cores with 3968 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 480 TB storage (Cinder / CEPH) <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for October 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br />
Candidate <br />
<br />
May 2018 <br />
<br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Queens or Rocky (the "R" release), on Ubuntu 18.04 <br />
<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://nimbus.ncg.ingrid.pt:8774/v2.1/%(tenant_id)s OpenStack Compute]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | Planned for second half of 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polytechnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | Not planned yet. <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | By 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens. Estimated upgrade Q4 2018.<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Planned for Q4 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-06-08<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4 <br />
| style="border-bottom:1px dotted silver;" | No OpenNebula upgrade planned. We are seriously considering moving to OpenStack, but not this year.<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No plans.<br> <br />
| style="border-bottom:1px dotted silver;" | CMD-ONE-1, except occi-server (installed version is 2.0.0.alpha.1, which is needed for GPU support) <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens or Rocky, whichever is supported at the time of the upgrade. Estimated upgrade period Q2 2019. <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-ctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-09-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BITP <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens during July-August 2018 (not possible earlier due to intensive resource usage); a downtime will be needed <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | Planned for July-August 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (13-11-2017) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Yes/No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB '''Icehouse''': Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) [https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 GGUS #126716] <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (Dec 2017)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-10-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Patrykl
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=99752
Federated Cloud infrastructure status
2018-07-11T14:08:03Z
<p>Patrykl: /* Status of the Federated Cloud */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources offered by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining, the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, the VO that allows users to evaluate the FedCloud infrastructure offered by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== CMF support highlights ==<br />
<br />
'''Last update: April 2018''' <br />
<br />
{| cellspacing="1" cellpadding="1" width="1553" border="1"<br />
|-<br />
! scope="col" | CMF <br />
! scope="col" | Version <br />
! scope="col" | Comments<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | &gt;=Ocata <br />
| According to https://releases.openstack.org/, Ocata and later releases are "Maintained", meaning they are supported for approximately 18 months after the release date. For Ocata, released on 2017-02-22, this means approximately until August 2018<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | Mitaka/Ubuntu LTS <br />
| some RCs also use Mitaka from Ubuntu LTS, which is supported for 5 years. Due to project constraints, some of these could decide to switch to a newer version, or stay with the current Mitaka/Ubuntu set-up as planned last year; '''we will ask about their plans once again, to confirm whether they will stay with Mitaka or not'''<br />
|-<br />
| OpenStack <br />
| bgcolor="#ff6666" align="center" | &lt;Ocata other than Mitaka/Ubuntu LTS <br />
| there are cases where an outdated version of OpenStack is used. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy], which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#ff6666" align="center" | 4 <br />
| no longer supported; OpenNebula RCs need to update to OpenNebula 5. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy], which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#66ff99" align="center" | 5 <br />
| Supported<br />
|-<br />
| Synnefo <br />
| bgcolor="#66ff99" align="center" | <br> <br />
| Supported<br />
|}<br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres' availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: December 2017 - UPDATE&nbsp;IN&nbsp;PROGRESS http://go.egi.eu/fedcloudstatus''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | CMF Upgrade plans <br />
! style="border-bottom:1px solid black;" | Using CMD <br />
! style="border-bottom:1px solid black;" | KVM/XEN? <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Pay-per-use <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu 18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 32 GB of RAM, up to 1TB attachable block storage<br> <br />
| style="border-bottom:1px dotted silver;" | Network ready, integration planned for Queens release <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | Queens in August 2018 <br />
| style="border-bottom:1px dotted silver;" | Yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-07-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2.1 <br />
| style="border-bottom:1px dotted silver;" | Not in the near future. The current version is compatible with CMD.<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Yes (CMD-ONE-1) <br />
<br />
Except occi-server, which is 2.0.0.alpha.1 <br />
<br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 192 Cores total <br />
<br />
- 340 GB of RAM total <br />
<br />
- Two data stores of 3 TB and 700 GB <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 100 GB <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Ready (network level) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-04-23<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4.14.2 <br />
| style="border-bottom:1px dotted silver;" | Migration to OpenStack Queens/Rocky with OpenID Connect authN/authZ. ETA Q1/2019. <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4.14.2 <br />
| style="border-bottom:1px dotted silver;" | Migration to OpenStack Mitaka in week 27 / 28<br />
| style="border-bottom:1px dotted silver;" | NO<br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-07-03<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton/Ocata/Pike/Queens <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Ubuntu 18.04 and Pike during Q3 2018<br> <br />
| style="border-bottom:1px dotted silver;" | No<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 7744 Cores: <br />
<br />
&nbsp;&nbsp;&nbsp; - 32 nodes x 8 vcpus x 16GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 36 nodes x 24 vcpus x 48GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 34 nodes x 32 vcpus x 128GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 2 nodes x 80 vcpus x 1TB RAM<br> <br />
<br />
&nbsp;&nbsp;&nbsp; - 24 nodes x 32 vcpus x 32GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 144 nodes x 32 vcpus x 32GB RAM <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUs, 100GB HD, 1TB RAM (upon request) <br />
<br />
GPU and Infiniband access (upon request)<br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
&nbsp;- 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu 18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu 18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | gpu2cpu12: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" | Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM - 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB) <br />
| Ready <br />
| No<br />
| 2018-06-25<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was: PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu 18.04. Planned but not scheduled yet <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI]<br>[http://cloud.recas.ba.infn.it:8774/v2.1/%(tenant_id)s Openstack Compute]<br>[http://cloud.recas.ba.infn.it:8080/v1/AUTH_%(tenant_id)s Openstack Object-Store]<br>[http://egi-cloud.recas.ba.infn.it/ Horizon dashboard with auth token] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | Not ready yet but planned <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-07-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago<br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Queens in Q4<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI]<br>[https://egi-cloud.pd.infn.it/dashboard Horizon dashboard with West-Life/WeNMR SSO and INDIGO-IAM]<br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 240 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 160GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-07-02<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 744 Cores with 3968 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 480 TB storage (Cinder / CEPH) <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for October 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br />
Candidate <br />
<br />
May 2018 <br />
<br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Queens or "R", Ubuntu 18.04 <br />
<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://nimbus.ncg.ingrid.pt:8774/v2.1/%(tenant_id)s Openstack Compute]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | Planned for second half of 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polytechnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | Not planned yet. <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | By 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens. Estimated upgrade Q4 2018.<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Planned for Q4 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-06-08<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4 <br />
| style="border-bottom:1px dotted silver;" | No OpenNebula upgrade planned. We are seriously considering a move to OpenStack, but not this year.<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No plans.<br> <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | CMD-ONE-1, except occi-server (installed version is 2.0.0.alpha.1, which is needed for GPU support) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens, whichever is supported at the time of the upgrade. Estimated upgrade Q4 2018. <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloudctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br> - 2 TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BITP <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens during July-August 2018 (not possible earlier due to intensive resource usage); a downtime will be needed <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | Planned for July-August 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (2017-11-13) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Yes/No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB<br> '''Icehouse''': m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardaci, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (Dec 2017)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-10-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Patrykl
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=98083
Federated Cloud infrastructure status
2017-12-01T15:53:50Z
<p>Patrykl: /* Status of the Federated Cloud */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, the VO used by providers to evaluate the FedCloud infrastructure<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: February 2017''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB of disk <br />
| style="border-bottom:1px dotted silver;" | Planned for February 2018<br />
| style="border-bottom:1px dotted silver;" | 2017-12-01<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 288 Cores with 592GB RAM (36 Xeon servers with 8 cores, 16GB RAM and a local 500GB HD each). <br />
<br />
- 160 Cores with 512GB RAM (8 Xeon servers with 20 cores, 64GB RAM and a local 500GB HD each). <br />
<br />
- Two data stores of 3TB and 700GB. This infrastructure also runs several core services for EGI.eu, which reduces the available capacity. Migration to a new endpoint (OpenNebula 5) is imminent; the process will be gradual, so some resources may not be immediately allocatable. <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack<br>Kilo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.zam.kfa-juelich.de:8787/ OCCI]<br>[https://swift.zam.kfa-juelich.de:8888/ CDMI]<br>[https://fsd-cloud.zam.kfa-juelich.de:5000/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 216 Cores with 294 GB RAM <br />
<br />
- 50 TB Storage (~20TB object + ~30TB block) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name:&nbsp;m16<br>16 Cores<br>16 GB RAM<br>Default Disk:&nbsp;20 (Root) + 20 (Ephemeral)<br>1TB attachable block storage<br>Open Ports: 22, 80, 443, 7000-7020 <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 2368 Cores (32 nodes x 8 vcpus x 16GB RAM - 36 nodes x 24 vcpus x 48GB RAM - 34 nodes x 32 vcpus x 128GB RAM - 2 nodes x 80 vcpus x 1TB RAM)<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | gpu2cpu12: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m <br />
| style="border-bottom:1px dotted silver;" | Ready<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM <br />
<br />
- 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge (16,384 MB RAM, 8 VCPUs, 160 GB disk)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI] (no CDMI endpoint at the moment; it will return in the near future) <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago, Matteo Segatta <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 240 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 160GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 168 Cores with 896 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-08-17) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 160 TB storage (Cinder / CEPH)<br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for February 2018<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polytechnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-ctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (2017-11-13) <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, CPU, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB<br> '''Icehouse''': m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27), see [https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 GGUS #126716] <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources decommissioned (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Patrykl
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_siteconf&diff=96840
Federated Cloud siteconf
2017-09-05T09:34:26Z
<p>Patrykl: /* Site-specific configuration */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The main purpose of this page is to collect the site-specific configuration parameters of the Federated Cloud sites, making it possible to compare sites, identify differences, and look up the parameters of a specific site. <br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
Parameters provided by each site are: <br />
<br />
*'''default network name''': the name of the network assigned by default when firing up a VM at the site; at the moment, the network may be private, public, or not assigned at all; example: ''/network/PRIVATE'' <br />
*'''default network type''': can be ''public'', ''private'', or ''N/A'' (not available) <br />
*'''public network name''': name of the public network to be used; usually this is different from the default network, which is private in most cases; example: ''/network/PUBLIC'' <br />
*'''is outgoing connectivity guaranteed by default at start time''': say YES if newly started VMs directly provide an outgoing connection, either through a public IP or, when no public IP is assigned at instantiation time, through NAT (a private IP enabled for outgoing connections) <br />
*'''port default firewall policy''': default policy available at infrastructure level (firewall); usually it is either "all open" or "all closed" <br />
*'''ports firewall configuration''': port configuration on top of the default firewall policy; here you can specify, e.g., which ports are open on the firewall if the default configuration is "all closed"; example: ''22, ICMP open'' <br />
*'''ports default CMF policy''': on OpenStack, it is possible to open/close ports using the OpenStack user interface; this "security groups" feature is an additional firewall layer, independent of the infrastructure (low-level) firewall, and can be configured by the user (through the Horizon interface or the API) or by asking for support through the EGI Helpdesk. Example: "all open" or "all closed". <br />
*'''ports policy on CMF''': if the ports default CMF policy is "all closed", you may want to specify here if there are exceptions. Example: ssh. <br />
*'''mandatory closed ports''': ports that cannot be opened due to local rules, national regulations, or infrastructure constraints. Example: 25 is usually not available for security reasons (use 587 instead). <br />
*'''port configuration requests method''': how the site accepts and fulfils port reconfiguration requests. Examples: GGUS, Horizon, other ways. <br />
*'''users requests''': please mention here any special requests that came from users in the past and that you worked on in order to make a specific use case run on your site. <br />
*'''comments''': any comments to report here that could help us improve this page.<br />
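As an illustration of the "ports default CMF policy" item above, the following is a minimal sketch of how a user could open SSH and ICMP with OpenStack security groups through the standard openstack CLI. The group name ''fedcloud-ssh'' and the server name ''my-vm'' are hypothetical examples, credentials are assumed to be already sourced, and, as the table below shows, some sites do not allow users to manage security groups at all. <br />

```shell
# Sketch only: assumes OpenStack credentials are already sourced (openrc file)
# and that the site lets users manage security groups.
# "fedcloud-ssh" and "my-vm" are example names, not values from this page.

# Create a dedicated security group
openstack security group create fedcloud-ssh \
    --description "SSH and ICMP for Federated Cloud VMs"

# Allow inbound SSH (22/tcp) from anywhere
openstack security group rule create fedcloud-ssh \
    --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0

# Allow ICMP (ping)
openstack security group rule create fedcloud-ssh --protocol icmp

# Attach the group to an existing VM
openstack server add security group my-vm fedcloud-ssh
```

The same rules can equally be created through the Horizon dashboard; where self-service is disabled, the table below indicates the request channel (usually GGUS). <br />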
<br />
= Site-specific configuration =<br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | <br> <br />
! style="border-bottom:1px solid black;" | default network name <br />
! style="border-bottom:1px solid black;" | default network type <br />
! style="border-bottom:1px solid black;" | public network name <br />
! style="border-bottom:1px solid black;" | is outgoing connectivity guaranteed by default at start time? <br />
! style="border-bottom:1px solid black;" | port default firewall policy <br />
! style="border-bottom:1px solid black;" | ports firewall configuration <br />
! style="border-bottom:1px solid black;" | ports default CMF policy <br />
! style="border-bottom:1px solid black;" | ports policy on CMF <br />
! style="border-bottom:1px solid black;" | mandatory closed ports <br />
! style="border-bottom:1px solid black;" | port configuration requests method <br />
! style="border-bottom:1px solid black;" | users requests <br />
! style="border-bottom:1px solid black;" | comments<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | none <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS ticket <br />
| style="border-bottom:1px dotted silver;" | 80, 8080, 443 <br />
| style="border-bottom:1px dotted silver;" | some users have requested to limit access to their VMs to a given list of source IPs<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS, email <br />
| style="border-bottom:1px dotted silver;" | 8080, 8081, 8888, 9443, 61616 (Training VO) to be opened <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | Static DHCP server (IP assigned if network contextualization fails)<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | https://carach5.ics.muni.cz:11443/network/24<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | 67/udp, 137/udp<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | One request to provide a private network.<br> <br />
| style="border-bottom:1px dotted silver;" | As soon as security groups are implemented in OCCI, we will switch to a more restrictive mode where only TCP 22 is open by default. Users will have a self-service control over this via OCCI.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/500ed7e7-162e-4d97-916e-bc7bc3ab9b41<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | As is well known, with OCCI we can create and destroy VMs and attach/link networks.<br>Would it also be possible to access (ssh) VMs with a private IP through OCCI?<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CYFRONET-CLOUD <br />
| style="border-bottom:1px dotted silver;" | fedcloud.egi.eu-internal-net<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br><br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | /network/PRIVATE<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22, 80, 443, 7000-7020 <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | all closed, except for 22, 80, 443, 7000-7020<br> <br />
| style="border-bottom:1px dotted silver;" | 25<br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | 3306, redirected to 7000; 25 (from the inside), redirected to 587.<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 7000-7020 have been defined by our network security team. We have so far redirected any requests for other ports to this range. There was once a debate when users insisted on port 3306 for MySQL, but we convinced them that their client was flawed in not supporting other ports. Similarly, users expected to be able to send email via port 25; we convinced them that port 587 is intended for that purpose.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | Not Available<br> <br />
| style="border-bottom:1px dotted silver;" | Not Available<br> <br />
| style="border-bottom:1px dotted silver;" | None<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | All newly created VMs get a public IPv4 and a public IPv6 address <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | provider-&lt;project VLAN ID&gt;<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | external<br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | any<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored; unusual activity (e.g. very high connection volumes/frequencies) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | port 8899 by enmr.eu<br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored; unusual activity (e.g. very high connection volumes/frequencies) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/9a393ad0-057e-4d74-8a50-1818114caaba<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22/80/443/8080 and ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users can use additional security groups to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 21, 25<br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack for 80/443/8080, GGUS otherwise<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | users are not allowed to create / modify / delete security groups (in particular in a catch-all VO). Comment from the ticket: there is no name for the default network. Indeed, with OpenStack and OOI, private networks do not have a default name (unlike the public one); each private network has its own ID (different for each project / VO).<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/&lt;UUID of the internal project network&gt;<br> <br />
| style="border-bottom:1px dotted silver;" | private <br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | upon request: 8899 (from a given IM/EC3 server), 80 to be negotiated<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI <br />
| style="border-bottom:1px dotted silver;" | /occi/network/fe82ef7b-4bb7-4c1e-b4ec-ec5c1b0c7333<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public_net<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | all open except port 111<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | ssh (22) open<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | several ports because fedcloud users are currently running different services: web portals and applications (80/8080,443), onedata (9443), hadoop, elasticsearch, etc.<br> <br />
| style="border-bottom:1px dotted silver;" | We are finally configuring the private network in new tenants with the latest version of ooi (1.1.2), which fixes a bug in the listing of networks. Newly created tenants will therefore have a private (isolated) network as well as the public (shared) one. We encourage you to use the private network whenever this is compatible with the architecture of the virtual infrastructure being deployed. If needed, we can provide direct access to the private network via our VPN (accessible with personal credentials).<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | Yes <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | ICMP, 22, 80, 443 open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Temporary configuration: the prior setup with a default routed internal network (VXLAN) and an optional public provider network didn't work, as a floating public IP could not be attached through OCCI (it worked through Horizon).<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | http://fcctrl.ulakbim.gov.tr:8787/occi1.2/network/ed61199b-baac-4524-b801-324f341b0d89 for fedcloud.egi.eu<br> <br />
| style="border-bottom:1px dotted silver;" | private <br> <br />
| style="border-bottom:1px dotted silver;" | http://fcctrl.ulakbim.gov.tr:8787/occi1.2/network/ed61199b-baac-4524-b801-324f341b0d89 <br />
http://fcctrl.ulakbim.gov.tr:8787/occi1.2/network/PUBLIC<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, 443, ICMP open <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open <br> <br />
| style="border-bottom:1px dotted silver;" | None <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS <br />
| style="border-bottom:1px dotted silver;" | 443 <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | &lt;PROJECTNAME&gt;_private_net<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public_net<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users can use additional security groups to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}</div>
Patrykl
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_siteconf&diff=96166
Federated Cloud siteconf
2017-07-20T11:25:20Z
<p>Patrykl: /* CYFRONET-CLOUD */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The main purpose of this page is to collect the site-specific configuration parameters of the Federated Cloud sites, making it possible to compare sites, identify differences, and look up the parameters of a specific site. <br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''.<br />
<br />
= Site-specific configuration =<br />
<br />
Parameters provided by each site are: <br />
* '''default network name''': the name of the network assigned by default when firing up a VM at the site; at the moment, the network may be private, public, or not assigned at all; example: ''/network/PRIVATE''<br />
* '''default network type''': can be ''public'', ''private'', or ''N/A'' (not available)<br />
* '''public network name''': name of the public network to be used; usually this is different from the default network, which is private in most cases; example: ''/network/PUBLIC''<br />
* '''port default firewall policy''': default policy available at infrastructure level (firewall); usually it is either "all open" or "all closed"<br />
* '''ports firewall configuration''': port configuration on top of the default firewall policy; here you can specify, e.g., which ports are open on the firewall if the default configuration is "all closed"; example: ''22, ICMP open''<br />
* '''ports default CMF policy''': on OpenStack, it is possible to open/close ports using the OpenStack user interface; this "security groups" feature is an additional firewall layer, independent of the infrastructure (low-level) firewall, and can be configured by the user (through the Horizon interface or the API) or by asking for support through the EGI Helpdesk. Example: "all open" or "all closed". <br />
* '''ports policy on CMF''': if the ports default CMF policy is "all closed", you may want to specify here if there are exceptions. Example: ssh. <br />
* '''mandatory closed ports''': ports that cannot be opened due to local rules, national regulations, or infrastructure constraints. Example: 25 is usually not available for security reasons (use 587 instead). <br />
* '''port configuration requests method''': how the site accepts and fulfils port reconfiguration requests. Examples: GGUS, Horizon, other ways.<br />
* '''users requests''': please mention here any special requests that came from users in the past and that you worked on in order to make a specific use case run on your site.<br />
* '''comments''': any comments to report here that could help us improve this page. <br />
<br />
== 100IT ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''': <br />
== BEgrid-BELNET ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''': <br />
== BIFI ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': all closed<br />
* '''ports firewall configuration''': 22, ICMP open<br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': GGUS, email<br />
* '''users requests''': 8080, 8081, 8888, 9443, 61616 (Training VO) to be opened<br />
* '''comments''':<br />
<br />
== CESGA ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''': <br />
== CESNET-MetaCloud ==<br />
* '''default network name''': https://carach5.ics.muni.cz:11443/network/24<br />
* '''default network type''': public<br />
* '''public network name''': same as the default network name<br />
* '''port default firewall policy''': all open<br />
* '''ports firewall configuration''': all open<br />
* '''ports default CMF policy''': all open<br />
* '''ports policy on CMF''': all open<br />
* '''mandatory closed ports''': 67/udp, 137/udp<br />
* '''port configuration requests method''': GGUS<br />
* '''users requests''': One request to provide a private network.<br />
* '''comments''': As soon as security groups are implemented in OCCI, we will switch to a more restrictive mode where only TCP 22 is open by default. Users will have a self-service control over this via OCCI.<br />
<br />
== CLOUDIFIN ==<br />
* '''default network name''': N/A<br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''':<br />
<br />
== CYFRONET-CLOUD ==<br />
* '''default network name''': fedcloud.egi.eu-internal-net<br />
* '''default network type''': private<br />
* '''public network name''': public<br />
* '''port default firewall policy''': all open<br />
* '''ports firewall configuration''': all open<br />
* '''ports default CMF policy''': all open<br />
* '''ports policy on CMF''': all open<br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': GGUS<br />
* '''users requests''': <br />
* '''comments''':<br />
<br />
== FZJ ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': all closed<br />
* '''ports firewall configuration''': 22, 80, 443, 7000-7020 <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': Openstack Horizon portal, GGUS<br />
* '''users requests''': 3306, redirected to 7000; 25 (from the inside), redirected to 587.<br />
* '''comments''': Ports 7000-7020 have been defined by our network security team; we have so far redirected any requests for other ports to this range. There was a debate once when users insisted on port 3306 for MySQL, but we convinced them that their client was at fault for not supporting other ports. Similarly, users expected to be able to send email via port 25; we convinced them that port 587 is intended for that purpose.<br />
<br />
== GoeGrid ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''': <br />
<br />
== HG-09-Okeanos-Cloud ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''': <br />
<br />
== IFCA-LCG2 ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': all open<br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': all closed<br />
* '''ports policy on CMF''': ICMP open<br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': OpenStack Horizon, GGUS<br />
* '''users requests''': <br />
* '''comments''':<br />
<br />
== IISAS-FedCloud ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': all closed<br />
* '''ports firewall configuration''': 22 and ICMP open<br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': Openstack Horizon portal<br />
* '''users requests''': port 8899 by enmr.eu<br />
* '''comments''': network connections should be monitored; unusual activities (e.g. connections with very high volume or frequency) should raise alarms<br />
<br />
== IISAS-Nebula ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': all closed<br />
* '''ports firewall configuration''': 22 and ICMP open<br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': Openstack Horizon portal<br />
* '''users requests''': port 8899 by enmr.eu<br />
* '''comments''': network connections should be monitored; unusual activities (e.g. connections with very high volume or frequency) should raise alarms<br />
<br />
== IISAS-GPUCloud ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': all closed<br />
* '''ports firewall configuration''': 22 and ICMP open<br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': Openstack Horizon portal<br />
* '''users requests''': port 8899 by enmr.eu<br />
* '''comments''': network connections should be monitored; unusual activities (e.g. connections with very high volume or frequency) should raise alarms<br />
<br />
== IN2P3-IRES ==<br />
* '''default network name''': <br />
* '''default network type''': private<br />
* '''public network name''': PUBLIC<br />
* '''port default firewall policy''': all closed<br />
* '''ports firewall configuration''': 22/80/443/8080 and ICMP open<br />
* '''ports default CMF policy''': Ports 22/tcp and ICMP open by default. Users have the ability to use additional security group to open other ports.<br />
* '''ports policy on CMF''':<br />
* '''mandatory closed ports''': 21, 25<br />
* '''port configuration requests method''': OpenStack for 80/443/8080, GGUS otherwise<br />
* '''users requests''': <br />
* '''comments''': users are not allowed to create/modify/delete security groups (in particular in a catch-all VO).<br />
<br />
== INFN-CATANIA-STACK ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''': <br />
<br />
== INFN-PADOVA-STACK ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': all closed<br />
* '''ports firewall configuration''': 22 open<br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': GGUS<br />
* '''users requests''': 80; 8899 (from upv.es servers only; it was needed for users who wanted to use IM and/or EC3, but has now been closed due to the CRITICAL vulnerability announced on 12 October 2016)<br />
* '''comments''': Here I am referring to ports open at the institute firewall level (as pointed out by Jerome's mail). In principle, every user should be able, via OCCI [1], to open the desired ports by setting up their own OpenStack security group for their VM. Of course, if a user opens e.g. port 443 via security groups while port 443 is closed at the institute firewall level, that port will not be reachable from outside INFN-PADOVA.<br />
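As an illustration of the two-layer model described in the comment above, the following minimal Python sketch builds the parameters of an OpenStack ingress security-group rule and checks whether a port is reachable when both the user's security group and the institute firewall are taken into account. The helper names (`ingress_rule`, `reachable_from_outside`) and the commented openstacksdk call are hypothetical illustrations, not any site's actual tooling.

```python
def ingress_rule(port, protocol="tcp", cidr="0.0.0.0/0"):
    """Build the parameters of an OpenStack ingress security-group rule
    opening a single port to the given CIDR (default: everywhere)."""
    return {
        "direction": "ingress",
        "protocol": protocol,
        "port_range_min": port,
        "port_range_max": port,
        "remote_ip_prefix": cidr,
    }


def reachable_from_outside(port, security_group_ports, institute_firewall_ports):
    # A port is reachable only if BOTH layers allow it: opening 443 in a
    # security group does not help if the institute firewall keeps it closed.
    return port in security_group_ports and port in institute_firewall_ports


rule = ingress_rule(443)
# With e.g. openstacksdk the rule would then be created against the user's
# own security group (hypothetical connection object `conn`):
#   conn.network.create_security_group_rule(security_group_id=sg.id, **rule)

print(rule["port_range_min"])                        # 443
print(reachable_from_outside(443, {22, 443}, {22}))  # False
```

The second check reflects exactly the caveat in the comment: a self-service security-group rule is necessary but not sufficient when the institute firewall is a separate, independently managed layer.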
<br />
== RECAS-BARI ==<br />
* '''default network name''': <br />
* '''default network type''': public<br />
* '''public network name''': <br />
* '''port default firewall policy''': all open<br />
* '''ports firewall configuration''': all open<br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': Horizon Dashboard, GGUS<br />
* '''users requests''': several ports, because fedcloud users are currently running different services: web portals and applications (80/8080, 443), onedata (9443), hadoop, elasticsearch, etc.<br />
* '''comments''':<br />
<br />
== SCAI ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''': <br />
<br />
== TR-FC1-ULAKBIM ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': all closed<br />
* '''ports firewall configuration''': 22/443 open by default<br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': GGUS<br />
* '''users requests''': <br />
* '''comments''':<br />
<br />
== UPV-GRyCAP ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''':<br />
<br />
== NCG-INGRID-PT ==<br />
* '''default network name''': <br />
* '''default network type''': <br />
* '''public network name''': <br />
* '''port default firewall policy''': <br />
* '''ports firewall configuration''': <br />
* '''ports default CMF policy''': <br />
* '''ports policy on CMF''': <br />
* '''mandatory closed ports''': <br />
* '''port configuration requests method''': <br />
* '''users requests''': <br />
* '''comments''':</div>
Patrykl
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=87446
Federated Cloud infrastructure status
2016-05-09T16:53:38Z
<p>Patrykl: /* Status of the Federated Cloud */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
[[Category:Federated_Cloud]]<br />
<br />
The purposes of this page are<br />
<br />
* providing a snapshot of the resources that are provided by the Federated Cloud infrastructure<br />
* providing information about the sites that are joining, or have expressed interest in joining the FedCloud<br />
* providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''.<br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://cloudmon.egi.eu/nagios/ cloudmon.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO].<br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance].<br />
<br />
'''Last update: May 2016'''<br />
<br />
{| cellspacing="0" cellpadding="5" class="wikitable" style="border:1px solid black; text-align:left;"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu<br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100 Percent IT Ltd <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | UNIZAR / BIFI <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | Carlos Gimeno <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Grizzly &amp; Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES<br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-epsh.unizar.es:8787 OCCI] (Grizzly)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 36 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | - 288 Cores with 592GB RAM (36 Xeon servers with 8 cores, 16GB RAM and local HD 500GB).<br />
- 160 Cores with 512GB RAM (8 Xeon servers with 20 cores, 64GB RAM and local HD 500GB).<br />
- Two data stores of 3TB and 700GB.<br />
This infrastructure is also used to run several core services for EGI.eu, which reduces the capacity available. <br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | OpenStack<br>(Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.zam.kfa-juelich.de:8787/ OCCI]<br>[https://swift.zam.kfa-juelich.de:8888/ CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 216 Cores with 294 GB RAM <br />
<br />
- 50 TB Storage (~20TB object + ~30TB block)<br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name:&nbsp;m16<br>16 Cores<br>16 GB RAM<br>Default Disk:&nbsp;20 (Root) + 20 (Ephemeral)<br>1TB attachable block storage <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Enol Fernandez, Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.ifca.es:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 2288 Cores (32 nodes x 8 vcpus x 16GB RAM; 36 nodes x 24 vcpus x 48GB RAM; 34 nodes x 32 vcpus x 128GB RAM; 1 node x 80 vcpus x 1TB RAM)<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.xlarge: 8 VCPUs, 160GB HD, 14GB RAM <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GRNET <br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | Nikolaos Nikoloutsakos, Kyriakos Ginis<br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM<br />
<br />
- 1 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp;<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova2.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 176 Cores with 3GB RAM per core <br />
<br />
&nbsp;- 50 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 96 Cores with 4GB RAM per core, 12 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | gpu2cpu12: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m<br />
|-<br />
| rowspan="3" style="border-bottom:1px dotted silver;" | INFN - Catania <br />
| rowspan="3" style="border-bottom:1px dotted silver;" | IT <br />
| rowspan="3" style="border-bottom:1px dotted silver;" | Roberto Barbera <br />
| rowspan="3" style="border-bottom:1px dotted silver;" | Giuseppe La Rocca, Diego Scardaci <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES<br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM <br />
<br />
- 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://stack-server-02.ct.infn.it:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 320 GB RAM <br />
<br />
- 3TB object storage, 2TB block storage<br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (featuring 16GB RAM, 160 GB hard disk, 8CPU)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI)<br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI] (no CDMI endpoint provided at the moment; it will return in the near future)<br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand.<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago <br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 144 Cores with 283 GB RAM <br />
<br />
- 3.7TB of block storage (max 1TB per tenant) and 1.8TB of ephemeral storage per compute node<br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
hpc: 24 VCPUs, 46GB RAM, 480GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:9999/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 336 Cores with 672 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | OpenNebula<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM <br />
- 1 TB Storage <br />
<br />
'''Information for MK-04-FINKICLOUD may be outdated.'''<br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | Miguel Ángel Díaz, Abel Paz <br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | OpenStack IceHouse <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://controller.ceta-ciemat.es:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 184 Cores with 224 GB RAM <br />
<br />
- 5.3 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xxlarge: 8 VCPUs, 12GB RAM, 40GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger, Vincent Legoll <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 192 Cores with 1232 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.2xlarge (CPU: 16, RAM: 32 GB, disk: 320 GB) <br> Monitoring: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | Openstack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://aurora.ncg.ingrid.pt:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polytechnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI<br />
| style="border-bottom:1px dotted silver;" | DE<br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg<br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend<br />
| style="border-bottom:1px dotted silver;" | SCAI<br />
| style="border-bottom:1px dotted silver;" | OpenStack<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder)<br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD<br />
|}<br />
<br />
== Integrating resource providers ==<br />
<br />
Last update: May 2015<br />
<br />
Sites that have a valid GOCDB entry should also have at least one service type listed and monitored via cloudmon.egi.eu. <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main Contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Certification <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | KISTI <br />
| style="border-bottom:1px dotted silver;" | KR <br />
| style="border-bottom:1px dotted silver;" | Soonwook Hwang <br />
| style="border-bottom:1px dotted silver;" | Sangwan Kim, Taesang Huh, Jae-Hyuck Kwak <br />
| style="border-bottom:1px dotted silver;" | KR-KISTI-CLOUD<br />
| style="border-bottom:1px dotted silver;" | OpenStack<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fccont.kisti.re.kr:8787/ OCCI]<br />
| style="border-bottom:1px dotted silver;" | 64 cores with 256 GB RAM and 6TB HDD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSC <br />
| style="border-bottom:1px dotted silver;" | FI <br />
| style="border-bottom:1px dotted silver;" | Jura Tarus <br />
| style="border-bottom:1px dotted silver;" | Luís Alves, Ulf Tigerstedt, Kalle Happonen <br />
| style="border-bottom:1px dotted silver;" | CSC-Cloud<br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Status: Testing resource integration<br />
|}<br />
<br />
== Interested resource providers ==<br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Representative <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Integration plans <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC-EBD <br />
| style="border-bottom:1px dotted silver;" | ES<br />
| style="border-bottom:1px dotted silver;" | Jesús Marco<br />
| style="border-bottom:1px dotted silver;" | Fernando Aguilar, Juan Carlos Sexto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | to integrate before 30th June<br />
| style="border-bottom:1px dotted silver;" | ~1000 cores (500 cores initially available in FedCloud), 1PB of storage (around 50% devoted to support FedCloud)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IICT-BAS <br />
| style="border-bottom:1px dotted silver;" | BG <br />
| style="border-bottom:1px dotted silver;" | Emanouil Atanassov <br />
| style="border-bottom:1px dotted silver;" | Todor Gurov <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Napoli <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Silvio Pardi <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - CNAF <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Cristina Aiftimiei <br />
| style="border-bottom:1px dotted silver;" | Davide Salomoni, Diego Michelotto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Torino <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Andrea Guarise <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubio Montero <br />
| style="border-bottom:1px dotted silver;" | Rafael Mayo García, Manuel Aurelio Rodríguez Pascual <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SURFsara <br />
| style="border-bottom:1px dotted silver;" | NL <br />
| style="border-bottom:1px dotted silver;" | Ron Trompert <br />
| style="border-bottom:1px dotted silver;" | Maurice Bouwhuis, Machiel Jansen <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | ISRGrid/IUCC <br />
| style="border-bottom:1px dotted silver;" | IL <br />
| style="border-bottom:1px dotted silver;" | Yossi Baruch <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | DESY <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Patrick Furhmann <br />
| style="border-bottom:1px dotted silver;" | Paul Millar <br />
| style="border-bottom:1px dotted silver;" | dCache <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Ian Collier <br />
| style="border-bottom:1px dotted silver;" | Frazer Barnsley, Alan Kyffin <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL Harwell Science <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Jens Jensen <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Castor <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Cloud storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFAE / PIC <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Victor Mendez <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SAGrid <br />
| style="border-bottom:1px dotted silver;" | ZA <br />
| style="border-bottom:1px dotted silver;" | Bruce Becker <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SRCE <br />
| style="border-bottom:1px dotted silver;" | HR <br />
| style="border-bottom:1px dotted silver;" | Emir Imamagic <br />
| style="border-bottom:1px dotted silver;" | Luko Gjenero <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | Planned - April 2014 <br />
| style="border-bottom:1px dotted silver;" | Status: Deploying OpenStack cluster, investigating storage options<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GridPP <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Adam Huffman <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Hosted at Imperial College<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CRNS/IN2P3-LAL <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Michel Jouvin <br />
| style="border-bottom:1px dotted silver;" | Mohammed Araj <br />
| style="border-bottom:1px dotted silver;" | StratusLab <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}</div>
Patrykl
https://wiki.egi.eu/w/index.php?title=MAN10&diff=86936
MAN10
2016-04-06T15:26:42Z
<p>Patrykl: </p>
<hr />
<div>{{Template:Op menubar}} {{Template:Doc_menubar}} {{TOC_right}} <br />
<br />
{{Ops_procedures<br />
|Doc_title = Cloud Resource Centre Installation Manual<br />
|Doc_link = [[MAN10|https://wiki.egi.eu/wiki/MAN10]]<br />
|Version = 11 March 2016<br />
|Policy_acronym = OMB<br />
|Policy_name = Operations Management Board<br />
|Contact_group = operations-support@mailman.egi.eu<br />
|Doc_status = DRAFT<br />
|Approval_date = <br />
|Procedure_statement = This manual provides information on how to set up a Resource Centre providing cloud resources in the EGI infrastructure.<br />
}} <br />
<br />
<br />
= Common prerequisites and documentation =<br />
<br />
General minimal requirements are:<br />
<br />
* Only minimal hardware is required to join. Hardware requirements depend on: <br />
** the cloud stack you use <br />
** the amount of resources you want to make available<br />
** the number of users/use cases you want to support<br />
*Servers need to authenticate each other in the EGI Federated Cloud context; this is fulfilled using X.509 certificates, so a Resource Centre should be able to obtain server certificates for some services. <br />
*User and research communities are called Virtual Organisations (VOs). Support for at least 3 VOs is needed to join as a Resource Centre: <br />
** ops and dteam, used for operational purposes as per RC OLA<br />
** fedcloud.egi.eu: this VO provides resources for application prototyping and validation<br />
*The operating systems supported by the EGI Federated Cloud Management Framework are: <br />
** Scientific Linux 6, CentOS7 (and in general RHEL-compatible)<br />
** Ubuntu (and in general Debian-based)<br />
<br />
In order to configure Virtual Organisations and private image lists, please consider the following guides to: <br />
<br />
* [[HOWTO16|enable a Virtual Organisation on a EGI Federated Cloud site]]<br />
* [https://wiki.appdb.egi.eu/main:faq:how_to_get_access_to_vo-wide_image_lists get access to VO wide image lists]<br />
* [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher subscribe to a private image list]<br />
<br />
= Integrating OpenStack =<br />
<br />
Integration with FedCloud requires a working OpenStack installation as a prerequisite (see http://docs.openstack.org/ for details). There are ready-to-use packages for most distributions (check for example [https://www.rdoproject.org/ RDO] for RedHat-based distributions). <br />
<br />
OpenStack integration with FedCloud is known to work with the following versions of OpenStack:<br />
<br />
* ''Havana'' (EOL by OpenStack)<br />
* ''Icehouse'' (EOL by OpenStack)<br />
* ''Juno'' (EOL by OpenStack)<br />
* '''Kilo''' (Security-supported, EOL: 2016-05-02)<br />
* '''Liberty''' (Current stable release)<br />
<br />
See http://releases.openstack.org/ for more details on the OpenStack releases.<br />
<br />
== Integration components ==<br />
<br />
Which components must be installed and configured depends on the services the RC wants to provide.<br />
* Keystone must always be available<br />
* If providing '''VM Management''' features (OCCI access or OpenStack access), then '''Nova, Cinder and Glance''' must be available; also '''Neutron''' is needed, but nova-network can also be used for legacy installations (see [http://docs.openstack.org/havana/install-guide/install/yum/content/section_networking-routers-with-private-networks.html here] how to configure per-tenant routers with private networks).<br />
<br />
[[File:Openstack-fedcloud.png|800px]]<br />
<br />
As the schema above shows, the integration is performed by installing some EGI extensions on top of the OpenStack components. <br />
<br />
*'''Keystone-VOMS Authorization plugin''' allows users with a valid VOMS proxy to access the OpenStack deployment<br />
*'''OpenStack OCCI Interface (ooi)''' translates between OpenStack API and OCCI<br />
*'''cASO''' collects accounting data from OpenStack <br />
*'''SSM''' sends the records extracted by cASO to the central accounting database on the EGI Accounting service (APEL)<br />
*'''BDII cloud provider''' registers the RC configuration and description through the EGI Information System to facilitate service discovery<br />
*'''vmcatcher''' checks the [https://appdb.egi.eu/browse/cloud EGI App DB] for new or updated images that can be provided by the RC to the user communities (VO) supported<br />
*The vmcatcher hooks ('''glancepush''' and '''OpenStack handler for vmcatcher''') push updated subscribed images from vmcatcher to Glance, using the OpenStack Python API <br />
<br />
== EGI User Management/AAI (Keystone-VOMS) ==<br />
<br />
Every FedCloud site must support authentication of users with X.509 certificates with VOMS extensions. The Keystone-VOMS extension enables this kind of authentication on Keystone. <br />
<br />
Documentation on the installation is available on https://keystone-voms.readthedocs.org/<br />
<br />
Notes: <br />
* '''You need a host certificate from a recognised CA for your keystone server'''. <br />
* Take into account that using the Keystone-VOMS plugin will enforce the use of https for your Keystone service; you will need to update the URLs in the Keystone catalog and in the configuration of your services:<br />
** You will probably need to add your CA to your system's CA bundle to avoid certificate validation issues: <code>/etc/ssl/certs/ca-certificates.crt</code> from the <code>ca-certificates</code> package on Debian/Ubuntu systems, or <code>/etc/pki/tls/certs/ca-bundle.crt</code> from the <code>ca-certificates</code> package on RHEL and derived systems. The [[Federated_Cloud_APIs_and_SDKs#CA_CertificatesCheck|Federated Cloud OpenStack Client guide]] includes information on how to do it.<br />
** replace http with https in <code>auth_[protocol|uri|url]</code> and <code>auth_[host|uri|url]</code> in the nova, cinder, glance and neutron config files (<code>/etc/nova/nova.conf</code>, <code>/etc/nova/api-paste.ini</code>, <code>/etc/neutron/neutron.conf</code>, <code>/etc/neutron/api-paste.ini</code>, <code>/etc/neutron/metadata_agent.ini</code>, <code>/etc/cinder/cinder.conf</code>, <code>/etc/cinder/api-paste.ini</code>, <code>/etc/glance/glance-api.conf</code>, <code>/etc/glance/glance-registry.conf</code>, <code>/etc/glance/glance-cache.conf</code>) and any other service that needs to check keystone tokens. <br />
** You can update the URLs of the services directly in the database:<br />
<pre><br />
mysql> use keystone;<br />
mysql> update endpoint set url="https://<keystone-host>:5000/v2.0" where url="http://<keystone-host>:5000/v2.0";<br />
mysql> update endpoint set url="https://<keystone-host>:35357/v2.0" where url="http://<keystone-host>:35357/v2.0";<br />
</pre><br />
<br />
* Support for EGI VOs: [[HOWTO16 | VOMS configuration]], you should configure fedcloud.egi.eu, dteam and ops VOs.<br />
<br />
* VOMS-Keystone configuration: most sites should enable the <code>autocreate_users</code> option in the <code>[voms]</code> section of the [https://keystone-voms.readthedocs.org/en/latest/configuration.html Keystone-VOMS configuration]. With this option enabled, new users are automatically created in your local Keystone the first time they log in to your site.<br />
<br />
* If (and only if) you need to configure the Per-User Subproxy (PUSP) feature, please follow the specific guide at [[Long-tail_of_science#Instructions_for_OpenStack_providers]]<br />
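The http-to-https switch described above can be scripted with sed. Below is a minimal sketch run on a throwaway copy (the file name and contents are placeholders standing in for <code>/etc/nova/nova.conf</code> and the other files listed above), so you can review the result before touching real configuration:<br />

```shell
# Sketch: rewrite http:// to https:// in auth_* options of a config file.
# "nova.conf.sample" is a placeholder copy, NOT the live /etc/nova/nova.conf.
cfg=nova.conf.sample
printf 'auth_uri = http://keystone:5000/v2.0\nauth_protocol = http\n' > "$cfg"
# Only touch lines that set auth_* options, leaving other URLs alone.
sed -i -E '/^auth_/ s|http(://)?|https\1|' "$cfg"
cat "$cfg"
```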
<br />
== EGI Virtual Machine Management Interface ==<br />
<br />
EGI currently operates two realms: the Open Standards Realm and the OpenStack Realm. Both are completely integrated with the EGI federation services, but they expose different interfaces to offer IaaS capabilities to the users. The Open Standards Realm uses the OCCI standard (supported by providers with OpenNebula, OpenStack and Synnefo cloud management frameworks), while the OpenStack Realm uses the OpenStack native Nova API (support limited to OpenStack providers). <br />
<br />
You can provide your service in one or both of the realms. For the OpenStack Realm, you just need to declare your endpoint in GOCDB as described below. For the Open Standards Realm you will need to deploy an additional service providing OCCI access.<br />
<br />
[https://github.com/openstack/ooi ooi] is the recommended software to provide OCCI for OpenStack (from Juno onwards). Installation and configuration of ooi is described in the [http://ooi.readthedocs.org/en/stable/index.html ooi documentation]. Packages for several distributions can be found at the [https://appdb.egi.eu/store/software/ooi ooi entry at EGI's AppDB] (recommended version is 0.2.0).<br />
<br />
For older OpenStack releases [https://github.com/EGI-FCTF/OCCI-OS OCCI-OS] can be used. Follow the <code>README.md</code> file in the github repo for instructions on installation and configuration. Be sure to select the branch (e.g. <code>stable/icehouse</code>) corresponding to your OpenStack deployment.<br />
<br />
Once the OCCI interface is installed, you should register it on your installation (adapt the region and URL to your deployment):<br />
<pre><br />
$ openstack service create --name occi --description "OCCI Interface" occi<br />
+-------------+----------------------------------+<br />
| Field | Value |<br />
+-------------+----------------------------------+<br />
| description | OCCI Interface |<br />
| enabled | True |<br />
| id | 6dfd6a56c9a6456b84e8c86038e58f56 |<br />
| name | occi |<br />
| type | occi |<br />
+-------------+----------------------------------+<br />
<br />
$ openstack endpoint create --region RegionOne occi --publicurl http://172.16.4.70:8787/occi1.1<br />
<br />
+-------------+----------------------------------+<br />
| Property | Value |<br />
+-------------+----------------------------------+<br />
| description | OCCI service |<br />
| id | 8e6de5d0d7624584bed6bec9bef7c9e0 |<br />
| name | occi_api |<br />
| type | occi |<br />
+-------------+----------------------------------+<br />
</pre><br />
<br />
== Integration with EGI FedCloud Appliance ==<br />
<br />
The EGI FedCloud Appliance packages a set of docker containers to federate an OpenStack deployment with some EGI services:<br />
* Information System (BDII)<br />
* Accounting (cASO, SSM)<br />
* Image management (atrope)<br />
<br />
You can get the current version of the appliance at its [https://appdb.egi.eu/store/vappliance/fedcloud.integration.appliance.openstack AppDB entry]. It is available as an OVA file. You can easily extract the VMDK disk from the OVA by untarring the file.<br />
<br />
=== Pre-requisites ===<br />
<br />
The appliance works by querying the public APIs of an existing OpenStack installation. It assumes [http://keystone-voms.readthedocs.org/ Keystone-VOMS] is installed at that OpenStack and the <code>voms.json</code> file is properly configured.<br />
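For reference, <code>voms.json</code> maps each supported VO to a local tenant. The sketch below follows the format described in the Keystone-VOMS documentation; the tenant names are placeholders that must match your deployment:<br />

```json
{
    "fedcloud.egi.eu": {
        "tenant": "egi"
    },
    "ops": {
        "tenant": "ops"
    },
    "dteam": {
        "tenant": "dteam"
    }
}
```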
<br />
The appliance uses the following OpenStack APIs:<br />
* nova, for getting images and flavors available and to get usage information<br />
* keystone, for authentication and for getting the available tenants<br />
* glance, for querying, uploading and removing VM images.<br />
<br />
Not all services need to be accessed with the same credentials. Each component is individually configured.<br />
<br />
A host certificate is required to send the accounting information to the accounting repository. The DN of the host certificate must be registered in GOCDB under the service type eu.egi.cloud.accounting (see the [[MAN10#Registration.2C_validation_and_certification|registration section]] below for more information).<br />
<br />
'''Note:'''<br />
* VM Image replication requires large disk space for storing the downloaded images. By default these are stored at <code>/image_data</code>. You can mount a volume at that location.<br />
* The appliance should be accessible by the EGI Information System, which will check GOCDB for the exact location of your appliance (see the [[MAN10#Registration.2C_validation_and_certification|registration section]] below for more information).<br />
<br />
=== EGI Accounting (cASO/SSM) ===<br />
<br />
There are two different processes handling the accounting integration:<br />
* cASO, which connects to the OpenStack deployment to get the usage information, and,<br />
* ssmsend, which sends that usage information to the central EGI accounting repository.<br />
<br />
They are run by cron every hour (cASO) and every six hours (ssmsend).<br />
<br />
[http://caso.readthedocs.org/en/latest/configuration.html cASO configuration] is stored at <code>/etc/caso/caso.conf</code>. Most default values are ok, but you must set:<br />
<br />
* <code>site_name</code> (line 100)<br />
* <code>tenants</code> (line 104)<br />
* credentials to access the accounting data (lines 122-128). Check the [http://caso.readthedocs.org/en/latest/configuration.html#openstack-configuration cASO documentation] for the expected permissions of the user configured here.<br />
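For illustration, the two mandatory options look like this (values are placeholders; the credential options are omitted here, see the cASO documentation linked above for those):<br />

```ini
[DEFAULT]
# Site name as registered in GOCDB (placeholder value)
site_name = MY-SITE
# Tenants (one per supported VO) from which usage records are extracted
tenants = egi,ops
```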
<br />
The cron job will use the voms mapping file at <code>/etc/voms.json</code>.<br />
<br />
cASO will write records to <code>/var/spool/apel</code> where ssmsend will take them.<br />
<br />
SSM configuration is available at <code>/etc/apel</code>. Defaults should be ok for most cases. The cron file uses <code>/etc/grid-security</code> for the CAs and the host certificate and private keys (in <code>/etc/grid-security/hostcert.pem</code> and <code>/etc/grid-security/hostkey.pem</code>).<br />
<br />
==== Running the services ====<br />
<br />
Both caso and ssmsend are run via cron scripts. They are located at <code>/etc/cron.d/caso</code> and <code>/etc/cron.d/ssmsend</code> respectively. For convenience there are also two scripts, <code>/usr/local/bin/caso-extract.sh</code> and <code>/usr/local/bin/ssm-send.sh</code>, that run the docker container with the proper volumes.<br />
<br />
=== EGI Information System (BDII) ===<br />
<br />
Information discovery provides a real-time view of the images and flavors actually available at the OpenStack deployment for federation users. It has two components:<br />
<br />
* Resource-Level BDII: queries the OpenStack deployment to get the information to publish<br />
<br />
* Site-Level BDII: gathers information from several resource-level BDIIs (in this case only one) and makes it publicly available to the EGI information system.<br />
<br />
==== Resource-level BDII ====<br />
<br />
This is provided by container <code>egifedcloud/cloudbdii</code>. You need to configure:<br />
<br />
* <code>/etc/cloud-info-provider/openstack.rc</code>, with the credentials to query your OpenStack. The user configured just needs to be able to access the lists of images and flavors.<br />
<br />
* <code>/etc/cloud-info-provider/openstack.yaml</code>, this file includes the static information of your deployment. Make sure to set the <code>SITE-NAME</code> as defined in GOCDB.<br />
<br />
==== Site-level BDII ====<br />
<br />
The <code>egifedcloud/sitebdii</code> container runs this process. Configuration files:<br />
* <code>/etc/sitebdii/glite-info-site-defaults.conf</code>. Set here the name of your site (as defined in GOCDB) and the public hostname where the appliance will be available.<br />
<br />
* <code>/etc/sitebdii/site.cfg</code>. Include here basic information on your site.<br />
<br />
==== Running the services ====<br />
<br />
In order to run the information discovery containers, there is a docker-compose file at <code>/etc/sitebdii/docker-compose.yml</code>. Run it with:<br />
<br />
docker-compose -f /etc/sitebdii/docker-compose.yml up -d<br />
<br />
Check the status with:<br />
<br />
docker-compose -f /etc/sitebdii/docker-compose.yml ps<br />
<br />
You should be able to get the BDII information with an LDAP client, e.g.:<br />
<br />
ldapsearch -x -p 2170 -h <yourVM.hostname.domain.com> -b o=glue<br />
<br />
=== EGI Image Management (atrope) ===<br />
<br />
The appliance provides VM image replication with [https://github.com/alvarolopez/atrope atrope], an alternative implementation to vmcatcher. Every 12 hours, the appliance will perform the following actions:<br />
* download the image lists configured in <code>/etc/atrope/hepix.yaml</code> and verify their signatures <br />
* check any changes in the lists and download new images<br />
* synchronise this information to the configured glance endpoint<br />
<br />
Configure the glance credentials in the <code>/etc/atrope/atrope.conf</code> file and add the lists you want to download to <code>/etc/atrope/hepix.yaml</code>. See the following example for the fedcloud.egi.eu list:<br />
<br />
<pre><br />
# This must match the VO name configured at the voms.json file<br />
fedcloud.egi.eu:<br />
url: https://vmcaster.appdb.egi.eu/store/vo/fedcloud.egi.eu/image.list<br />
enabled: true<br />
# All image lists from AppDB will have this endorser<br />
endorser:<br />
dn: '/DC=EU/DC=EGI/C=NL/O=Hosts/O=EGI.eu/CN=appdb.egi.eu'<br />
ca: "/DC=ORG/DC=SEE-GRID/CN=SEE-GRID CA 2013"<br />
# You must get this from AppDB<br />
token: 17580f07-1e33-4a38-94e3-3386daced5be<br />
# if you want to restrict the images downloaded from the AppDB, you can add here a list of the identifiers<br />
# check the "dc:identifier" field in the image list file.<br />
images: []<br />
# image names will be prefixed with this string for easy identification<br />
prefix: "FEDCLOUD "<br />
</pre><br />
<br />
Check [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher How to subscribe to a private image list] for instructions on getting the URL and token. The <code>prefix</code>, if specified, will be prepended to the image title in glance. You can define a subset of images to download with the <code>images</code> field.<br />
<br />
==== Running the service ====<br />
<br />
atrope is run via a cron script: <code>/etc/cron.d/atrope</code>. For convenience, the <code>/usr/local/bin/atrope-dispatch.sh</code> script runs the docker container with the proper volumes.<br />
<br />
== Integration with individual components ==<br />
=== EGI Accounting (cASO/SSM) ===<br />
<br />
Every cloud RC should publish utilization data to the EGI accounting database. You will need to install '''cASO''', a pluggable extractor of Cloud Accounting Usage Records from OpenStack.<br />
<br />
Documentation on how to install and configure cASO is available at https://caso.readthedocs.org/en/latest/<br />
<br />
In order to send the records to the accounting database, you will also need to configure '''SSM''', whose documentation can be found at https://github.com/apel/ssm<br />
<br />
=== EGI Information System (BDII) ===<br />
<br />
Sites must publish information to the EGI information system, which is based on BDII. The BDII can be installed directly from the distribution repository; the package is usually named "bdii". <br />
<br />
There is a common cloud information provider for all cloud management frameworks; it collects the information from the CMF in use and sends it to the aforementioned BDII. It can be installed on the same machine as the BDII or on another machine. The installation and configuration guide for the cloud information provider can be found in the [[HOWTO15|FedCloud BDII instructions]]; more detailed installation and configuration instructions are available at https://github.com/EGI-FCTF/cloud-bdii-provider<br />
<br />
=== EGI Image Management (vmcatcher, glancepush) ===<br />
<br />
Sites in FedCloud offering VM management capability must give access to VO-endorsed VM images. This functionality is provided with vmcatcher (which can subscribe to the image lists available in AppDB) and a set of tools that push the subscribed images into the glance catalog. In order to subscribe to VO-wide image lists, you need a valid access token for the AppDB. Check the [https://wiki.appdb.egi.eu/main:faq:how_to_get_access_to_vo-wide_image_lists how to get access to VO-wide image lists] and [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher how to subscribe to a private image list] documentation for more information.<br />
<br />
Please refer to [https://github.com/hepix-virtualisation/vmcatcher vmcatcher documentation] for installation. <br />
<br />
Vmcatcher can be connected to the OpenStack Glance catalog using the [https://appdb.egi.eu/store/software/python.glancepush python-glancepush] tool and the [https://appdb.egi.eu/store/software/openstack.handler.for.vmcatcher OpenStack Handler for Vmcatcher] event handler. To install and configure glancepush and the handler, you can refer to the following instructions: <br />
<br />
*Install the latest release of glancepush from https://appdb.egi.eu/store/software/python.glancepush <br />
**for Debian-based systems, just download the tarball, extract it, and execute <code>python setup.py install</code><br />
<br />
[stack@ubuntu]$ wget http://repository.egi.eu/community/software/python.glancepush/0.0.X/releases/generic/0.0.6/python-glancepush-0.0.6.tar.gz<br />
[stack@ubuntu]$ tar -zxvf python-glancepush-0.0.6.tar.gz<br />
[stack@ubuntu]$ python setup.py install<br />
<br />
**for RHEL6 you can run: <br />
<br />
[stack@rhel]$ yum localinstall http://repository.egi.eu/community/software/python.glancepush/0.0.X/releases/sl/6/x86_64/RPMS/python-glancepush-0.0.6-1.noarch.rpm<br />
<br />
*Then, configure glancepush directories<br />
<br />
[stack@ubuntu]$ sudo mkdir -p /var/spool/glancepush /etc/glancepush/log /etc/glancepush/transform/ /etc/glancepush/clouds /var/log/glancepush<br />
[stack@ubuntu]$ sudo chown stack:stack -R /var/spool/glancepush /etc/glancepush /var/log/glancepush/<br />
<br />
*Copy the file /etc/keystone/voms.json to /etc/glancepush/voms.json. Then create a file in the /etc/glancepush/clouds directory for every VO to which you are subscribed. For example, if you're subscribed to fedcloud, atlas and lhcb, you'll need 3 files in that directory with the credentials for these VOs/tenants, such as:<br />
<br />
[general]<br />
# Tenant for this VO. Must match the tenant defined in voms.json file<br />
testing_tenant=egi<br />
# Identity service endpoint (Keystone)<br />
endpoint_url=https://server4-eupt.unizar.es:5000/v2.0<br />
# User Password<br />
password=123456<br />
# User<br />
username=John<br />
# Set this to true if you're NOT using self-signed certificates<br />
is_secure=True<br />
# SSH private key that will be used to perform policy checks (to be done)<br />
ssh_key=Carlos_lxbifi81<br />
# WARNING: Only define the next variable if you're going to need it. Otherwise you may encounter problems<br />
cacert=path_to_your_cert<br />
<br />
*Install the [https://appdb.egi.eu/store/software/openstack.handler.for.vmcatcher OpenStack handler for vmcatcher]. For Debian-based systems, just download the tarball, extract it and execute <code>python setup.py install</code><br />
<br />
[stack@ubuntu]$ wget http://repository.egi.eu/community/software/openstack.handler.for.vmcatcher/0.0.X/releases/generic/0.0.7/gpvcmupdate-0.0.7.tar.gz<br />
[stack@ubuntu]$ tar -zxvf gpvcmupdate-0.0.7.tar.gz<br />
[stack@ubuntu]$ python setup.py install<br />
<br />
while for RHEL6 you can run: <br />
<br />
[stack@rhel]$ yum localinstall http://repository.egi.eu/community/software/openstack.handler.for.vmcatcher/0.0.X/releases/sl/6/x86_64/RPMS/gpvcmupdate-0.0.7-1.noarch.rpm<br />
<br />
*Create the vmcatcher folders for OpenStack<br />
<br />
[stack@ubuntu]$ mkdir -p /opt/stack/vmcatcher/cache /opt/stack/vmcatcher/cache/partial /opt/stack/vmcatcher/cache/expired<br />
<br />
*Check that vmcatcher is running properly by listing and subscribing to an image list<br />
<br />
[stack@ubuntu]$ export VMCATCHER_RDBMS="sqlite:////opt/stack/vmcatcher/vmcatcher.db"<br />
[stack@ubuntu]$ vmcatcher_subscribe -l<br />
[stack@ubuntu]$ vmcatcher_subscribe -e -s https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list<br />
[stack@ubuntu]$ vmcatcher_subscribe -l<br />
8ddbd4f6-fb95-4917-b105-c89b5df99dda True None https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list<br />
<br />
*Create a CRON wrapper for vmcatcher, named <code>$HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh</code>, using the following code<br />
<br />
#!/bin/bash<br />
#Cron handler for the VMCatcher image synchronization script for OpenStack<br />
<br />
#Vmcatcher configuration variables<br />
export VMCATCHER_RDBMS="sqlite:////opt/stack/vmcatcher/vmcatcher.db"<br />
export VMCATCHER_CACHE_DIR_CACHE="/opt/stack/vmcatcher/cache"<br />
export VMCATCHER_CACHE_DIR_DOWNLOAD="/opt/stack/vmcatcher/cache/partial"<br />
export VMCATCHER_CACHE_DIR_EXPIRE="/opt/stack/vmcatcher/cache/expired"<br />
export VMCATCHER_CACHE_EVENT="python $HOME/gpvcmupdate/gpvcmupdate.py -D"<br />
<br />
#Update vmcatcher image lists<br />
vmcatcher_subscribe -U<br />
<br />
#Add all the new images to the cache<br />
for a in `vmcatcher_image -l | awk '{if ($2==2) print $1}'`; do<br />
vmcatcher_image -a -u $a<br />
done <br />
<br />
#Update the cache<br />
vmcatcher_cache -v -v<br />
<br />
#Run glancepush<br />
/usr/bin/glancepush.py<br />
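The <code>awk</code> filter in the wrapper above prints the identifiers (column 1) of rows whose second column equals 2, which the script treats as images that still need to be added to the cache. Its behaviour can be checked in isolation (the sample rows below are illustrative, not real <code>vmcatcher_image -l</code> output):

```shell
# Print column 1 for rows whose column 2 is 2 (same filter as the cron wrapper).
printf '%s\n' \
  '541b01a8-94bd-4545-83a8-6ea07209b440 2 tinycorelinux' \
  'aaaaaaaa-0000-0000-0000-000000000000 1 otherimage' |
  awk '{if ($2==2) print $1}'
# → 541b01a8-94bd-4545-83a8-6ea07209b440
```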
<br />
*Set the newly created file as executable<br />
<br />
[stack@ubuntu]$ chmod +x $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh<br />
<br />
*Test that the vmcatcher handler is working correctly by running<br />
<br />
[stack@ubuntu]$ $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh<br />
INFO:main:Defaulting actions as 'expire', and 'download'.<br />
DEBUG:Events:event 'ProcessPrefix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=Ignoring ProcessPrefix event.<br />
INFO:DownloadDir:Downloading '541b01a8-94bd-4545-83a8-6ea07209b440'.<br />
DEBUG:Events:event 'AvailablePrefix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'<br />
DEBUG:Events:stdout=AvailablePrefix<br />
DEBUG:Events:stderr=<br />
INFO:CacheMan:moved file 541b01a8-94bd-4545-83a8-6ea07209b440<br />
DEBUG:Events:event 'AvailablePostfix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'<br />
DEBUG:Events:stdout=AvailablePostfixCreating Metadata Files<br />
DEBUG:Events:stderr=<br />
DEBUG:Events:event 'ProcessPostfix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=Ignoring ProcessPostfix event.<br />
<br />
<br> <br />
<br />
*Add the following line to the stack user's crontab:<br />
<br />
50 */6 * * * $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh &gt;&gt; /var/log/glancepush/vmcatcher.log 2&gt;&amp;1<br />
<br />
''NOTES:'' <br />
<br />
*It is recommended to execute glancepush and vmcatcher_cache as the stack user or another non-root user. <br />
*Images expired by VMcatcher are removed from OpenStack.<br />
<br />
== Post-installation ==<br />
<br />
After the installation of all the needed components, it is recommended to set the following policies on Nova to prevent users from accessing other users' resources:<br />
<pre><br />
[root@egi-cloud]# sed -i 's|"admin_or_owner": "is_admin:True or project_id:%(project_id)s",|"admin_or_owner": "is_admin:True or project_id:%(project_id)s",\n "admin_or_user": "is_admin:True or user_id:%(user_id)s",|g' /etc/nova/policy.json<br />
[root@egi-cloud]# sed -i 's|"default": "rule:admin_or_owner",|"default": "rule:admin_or_user",|g' /etc/nova/policy.json<br />
[root@egi-cloud]# sed -i 's|"compute:get_all": "",|"compute:get": "rule:admin_or_owner",\n "compute:get_all": "",|g' /etc/nova/policy.json<br />
</pre><br />
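The effect of these substitutions can be rehearsed on a scratch copy before editing the live file. A minimal sketch against a stripped-down fragment containing only the three affected rules:

```shell
# Dry-run the three policy edits on a scratch fragment of policy.json.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",
    "compute:get_all": "",
EOF
sed -i 's|"admin_or_owner": "is_admin:True or project_id:%(project_id)s",|"admin_or_owner": "is_admin:True or project_id:%(project_id)s",\n    "admin_or_user": "is_admin:True or user_id:%(user_id)s",|g' "$tmp"
sed -i 's|"default": "rule:admin_or_owner",|"default": "rule:admin_or_user",|g' "$tmp"
sed -i 's|"compute:get_all": "",|"compute:get": "rule:admin_or_owner",\n    "compute:get_all": "",|g' "$tmp"
grep -c 'admin_or_user' "$tmp"
# → 2 (the new rule definition plus the rewritten "default")
```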
<br />
== Registration, validation and certification ==<br />
<br />
As mentioned in the [https://wiki.egi.eu/wiki/Federated_Cloud_resource_providers_support main page], RC services must be '''registered''' in the [https://goc.egi.eu EGI Configuration Management Database (GOCDB)]. If you are creating a new site for your cloud services, please follow the [https://wiki.egi.eu/wiki/PROC09 Resource Centre Registration and Certification] with the help of EGI Operations and your reference Resource Infrastructure. <br />
<br />
You will need to register the following services (all of them can be provided by the Federated Cloud Appliance):<br />
* '''Site-BDII'''. This service collects and publishes the site's data for the Information System. Existing sites should already have this registered. <br />
* '''eu.egi.cloud.accounting'''. Register here the host sending the records to the accounting repository (executing SSM send).<br />
* '''eu.egi.cloud.vm-metadata.vmcatcher''' for the VMI replication mechanism. Register here the host providing the replication. <br />
<br />
If offering an OCCI interface, the site must also register:<br />
* '''eu.egi.cloud.vm-management.occi''' for the OCCI endpoint offered by the site. Please note the special endpoint URL syntax described at [[Federated_Cloud_Technology#eu.egi.cloud.vm-management.occi|GOCDB usage in FedCloud]]<br />
<br />
If offering native OpenStack access, you must register:<br />
* '''org.openstack.nova''' for the Nova endpoint of the site. Please note the special endpoint URL syntax described at [[Federated_Cloud_Technology#org.openstack.nova|GOCDB usage in FedCloud]]<br />
<br />
<br />
Sites should also declare the following properties using the ''Site Extension Properties'' feature:<br />
*# Max number of virtual cores for VM with parameter name: <code>cloud_max_cores4VM</code> <br />
*# Max amount of RAM for VM with parameter name: <code>cloud_max_RAM4VM</code> using the format: value+unit, e.g. "16GB".<br />
*# Max amount of storage that could be mounted in a VM with parameter name: <code>cloud_max_storage4VM</code> using the format: value+unit, e.g. "16GB".<br />
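The value+unit format for the last two properties can be sanity-checked before entering them in GOCDB. A small sketch (the accepted units and the regex are our own assumption, not an official GOCDB validator):

```shell
# Check that a capacity value follows the "value+unit" convention, e.g. "16GB".
valid_capacity() {
    # Assumption: an integer immediately followed by MB, GB or TB.
    echo "$1" | grep -Eq '^[0-9]+(MB|GB|TB)$'
}

valid_capacity "16GB" && echo ok    # → ok
valid_capacity "16 GB" || echo bad  # → bad (no space allowed)
```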
<br />
The '''installation validation''' is part of the aforementioned [https://wiki.egi.eu/wiki/PROC09 Resource Centre Registration and Certification] procedure. After you register the services in GOCDB, EGI Operations will test your services using the [[HOWTO04_Site_Certification_Manual_tests#Check_the_functionality_of_the_cloud_elements|site certification manual tests]] mentioned in the same procedure. Use that guide to test the published services and check that they are behaving properly.<br />
<br />
Once the site services are registered in GOCDB (and flagged as "monitored") they will appear in the EGI service monitoring tools. EGI will check the status of the services (see [https://wiki.egi.eu/wiki/Federated_Cloud_infrastructure_status Infrastructure Status] for details). Check that your services are present in the EGI service monitoring tools and passing the tests; if you experience any issues (services not shown, services not OK, etc.) please contact EGI Operations or your reference Resource Infrastructure.<br />
<br />
= Integrating OpenNebula =<br />
<br />
EGI Cloud Site based on OpenNebula is an ordinary OpenNebula installation with some EGI-specific integration components. There are no additional requirements placed on internal site architecture.<br />
<br />
[[File:OpenNebulaSite.png]]<br />
<br />
The following '''components''' must be installed alongside OpenNebula:<br />
* '''vmcatcher''', which checks the [https://appdb.egi.eu/browse/cloud EGI AppDB] for new or updated images that need to be supported on the site. It downloads images and registers them with OpenNebula, so that they can be used in resource instantiation. Vmcatcher configuration is [[#EGI_Image_Management_2|explained below]].<br />
* '''rOCCI-server''', which provides a standard OCCI interface. It translates between the OpenNebula API and OCCI. It must be configured to use its ''opennebula'' backend, and to use ''voms'' for authentication. Follow the [[rOCCI:ROCCI-server_Admin_Guide|rOCCI-server Admin Guide]] for installation, and check [[#rOCCI-server + VOMS|below]] for FedCloud-specific configuration.<br />
* '''local Perun scripts''', which allow Perun to set up, block and remove user accounts from OpenNebula, thus managing the full life cycle of a user account. Local script configuration is [[#Perun integration|explained below]].<br />
* '''oneacct''' scripts, which collect accounting data from OpenNebula and publish it into EGI's APEL instance. Oneacct configuration is explained at the [[Fedcloud-tf:WorkGroups:Scenario4#OpenNebula_Accounting_Scripts|FedCloud Accounting]] page.<br />
* '''BDII''', which registers the site's configuration and description through the EGI Information System to facilitate service discovery. Configuration is [[#EGI_Information_System_2|explained below]].<br />
<br />
Please consider that:<br />
<br />
* '''CDMI''' storage endpoints are currently '''not supported''' for OpenNebula-based sites.<br />
<br />
* OpenNebula ''Sunstone'' is '''not''' required!<br />
<br />
The following '''ports''' must be open to allow access to an OpenNebula-based FedCloud site:<br />
<br />
{| class="wikitable" style="margin: auto; margin-top: 30px; margin-bottom: 30px;"<br />
|+ Open Ports for OpenNebula and other components in FedCloud<br />
! style="width: 90px;" | Port<br />
! style="width: 110px;" | Application<br />
! style="width: 430px;" | Host<br />
! style="width: 250px;" | Note<br />
|-<br />
|'''22'''/TCP<br />
|'''SSH'''<br />
|OpenNebula '''Server''' Node<br />
|<code>one</code> tools, Perun scripts<br />
|-<br />
|'''2170'''/TCP<br />
|'''BDII'''/LDAP<br />
|BDII Node (typically the OpenNebula '''Server''' Node)<br />
|EGI Service Discovery<br />
|-<br />
|'''11443'''/TCP<br />
|'''OCCI'''/HTTPs<br />
|'''rOCCI-server''' node (typically the OpenNebula Server Node but can be located elsewhere)<br />
|OCCI cloud resource management<br />
|}<br />
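Reachability of these ports from outside the site can be probed with a few lines of bash (a sketch; <code>one.example.org</code> is a placeholder for your own host, and the <code>/dev/tcp</code> device requires bash):

```shell
#!/bin/bash
# Probe whether a TCP port on a host accepts connections (bash /dev/tcp sketch).
check_port() {
    local host="$1" port="$2"
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port closed"
    fi
}

# Example, run from outside the site:
# for p in 22 2170 11443; do check_port one.example.org "$p"; done
```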
<br />
Open ports cannot be listed in advance for '''OpenNebula hosts''', the nodes that run virtual machines: their port requirements depend on the workloads and cannot be known beforehand.<br />
<br />
This is an overview of the '''service accounts''' used in an OpenNebula-based FedCloud site. The names shown are defaults and can be changed if required.<br />
<br />
{| class="wikitable" style="margin: auto; margin-top: 30px; margin-bottom: 30px;"<br />
|+ Service Accounts in OpenNebula sites in FedCloud<br />
! style="width: 90px;" | Type<br />
! style="width: 110px;" | Account name<br />
! style="width: 180px;" | Host<br />
! style="width: 500px;" | Use<br />
|-<br />
|rowspan="4"|System accounts<br />
|<code>oneadmin</code><br />
|OpenNebula Server<br />
|'''Default''' management account in OpenNebula. Also used by the '''Perun''' scripts, which access the account with SSH.<br />
|-<br />
|<code>rocci</code><br />
|rOCCI-server host (typically OpenNebula server)<br />
|Apache application processes for the '''rOCCI-server'''. It is only a service account, no access required.<br />
|-<br />
|<code>apel</code><br />
|OpenNebula server<br />
|Service account used to run '''APEL export''' scripts. Just a service account, no access required.<br />
|-<br />
|<code>openldap</code><br />
|OpenNebula server<br />
|Service account used to run LDAP for '''BDII'''. Just a service account, no access required.<br />
|-<br />
|OpenNebula accounts<br />
|<code>rocci</code><br />
|OpenNebula Server<br />
|Used by the '''rOCCI-server''' to perform tasks through the OpenNebula API.<br />
|}<br />
<br />
Follow [http://opennebula.org/documentation/ OpenNebula Documentation] and '''install OpenNebula with enabled X.509 authentication support'''.<br />
<br />
The following OpenNebula versions are supported:<br />
* OpenNebula v4.4.x (legacy)<br />
* OpenNebula v4.6.x<br />
* OpenNebula v4.8.x<br />
* OpenNebula v4.10.x<br />
* OpenNebula v4.12.x<br />
* OpenNebula v4.14.x<br />
<br />
Integration Prerequisites:<br />
* Working OpenNebula installation with X.509 support enabled. Resource Centres are encouraged to follow the [http://docs.opennebula.org/4.12/administration/authentication/x509_auth.html step-by-step configuration guide provided by OpenNebula developers]. There is no need to change authentication driver for the oneadmin user or create any user accounts manually at this time. <br />
* Valid IGTF-trusted host certificates for selected hosts.<br />
<br />
=== EGI Virtual Machine Management Interface -- OCCI ===<br />
<br />
See [[rOCCI:ROCCI-server_Admin_Guide|rOCCI-server Installation Guide]].<br />
<br />
=== EGI User Management/AAI ===<br />
<br />
==== rOCCI-server + VOMS ====<br />
<br />
*Configure OpenNebula's X.509 authentication by modifying the /etc/one/auth/x509_auth.conf file:<br />
<br />
# Path to the trusted CA directory. It should contain the trusted CA's for<br />
# the server, each CA certificate should be named CA_hash.0<br />
:ca_dir: "/etc/grid-security/certificates"<br />
<br />
For more information, see the official [http://opennebula.org/documentation OpenNebula documentation].<br />
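The <code>CA_hash.0</code> naming refers to OpenSSL's subject hash; the expected filename for a given CA certificate can be derived with the <code>openssl</code> tool (a sketch, assuming a modern OpenSSL where <code>-hash</code> means the subject hash):

```shell
# Print the hashed filename (hash.0) expected for a CA certificate
# in the :ca_dir: directory, e.g. /etc/grid-security/certificates.
ca_link_name() {
    echo "$(openssl x509 -hash -noout -in "$1").0"
}

# Example: ca_link_name /etc/grid-security/certificates/SomeCA.pem
```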
<br />
*rOCCI-server <br />
Example VHOST configuration file for Apache2 with only VOMS authentication enabled:<br />
<br />
<pre><br />
<VirtualHost *:11443><br />
# if you wish to change the default Ruby used to run this app<br />
PassengerRuby /opt/occi-server/embedded/bin/ruby<br />
<br />
# enable SSL<br />
SSLEngine on<br />
<br />
# for security reasons you may restrict the SSL protocol, but some clients may fail if SSLv2 is not supported<br />
SSLProtocol All -SSLv2 -SSLv3<br />
<br />
# this should point to your server host certificate<br />
SSLCertificateFile /etc/grid-security/hostcert.pem<br />
<br />
# this should point to your server host key<br />
SSLCertificateKeyFile /etc/grid-security/hostkey.pem<br />
<br />
# directory containing the Root CA certificates and their hashes<br />
SSLCACertificatePath /etc/grid-security/certificates<br />
<br />
# directory containing CRLs<br />
SSLCARevocationPath /etc/grid-security/certificates<br />
<br />
# set to optional, this tells Apache to attempt to verify SSL certificates if provided<br />
# for X.509 access with GridSite/VOMS, however, set to 'require'<br />
#SSLVerifyClient optional<br />
SSLVerifyClient require<br />
<br />
# if you have multiple CAs in the file above, you may need to increase the verify depth<br />
SSLVerifyDepth 10<br />
<br />
# enable passing of SSL variables to passenger. For GridSite/VOMS, enable also exporting certificate data<br />
SSLOptions +StdEnvVars +ExportCertData<br />
<br />
# configure OpenSSL inside rOCCI-server to validate peer certificates (for CMFs)<br />
#SetEnv SSL_CERT_FILE /path/to/ca_bundle.crt<br />
SetEnv SSL_CERT_DIR /etc/grid-security/certificates<br />
<br />
# set RackEnv<br />
RackEnv production<br />
LogLevel info<br />
<br />
ServerName occi.host.example.org<br />
# important, this needs to point to the public folder of your rOCCI-server<br />
DocumentRoot /opt/occi-server/embedded/app/rOCCI-server/public<br />
<Directory /opt/occi-server/embedded/app/rOCCI-server/public><br />
## Enable GridSite environment variables (needed for gridsite-admin.cgi to work)<br />
GridSiteEnvs on<br />
## Nice GridSite directory listings (without truncating file names!)<br />
GridSiteIndexes off<br />
## If this is greater than zero, we will accept GSI Proxies for clients<br />
## (full client certificates - eg inside web browsers - are always ok)<br />
GridSiteGSIProxyLimit 4<br />
## This directive allows authorized people to write/delete files<br />
## from non-browser clients - eg with htcp(1)<br />
GridSiteMethods ""<br />
<br />
Allow from all<br />
Options -MultiViews<br />
</Directory><br />
<br />
# configuration for Passenger<br />
PassengerUser rocci<br />
PassengerGroup rocci<br />
PassengerMinInstances 3<br />
PassengerFriendlyErrorPages off<br />
<br />
# configuration for rOCCI-server<br />
## common<br />
SetEnv ROCCI_SERVER_LOG_DIR /var/log/occi-server<br />
SetEnv ROCCI_SERVER_ETC_DIR /etc/occi-server<br />
<br />
SetEnv ROCCI_SERVER_PROTOCOL https<br />
SetEnv ROCCI_SERVER_HOSTNAME occi.host.example.org<br />
SetEnv ROCCI_SERVER_PORT 11443<br />
SetEnv ROCCI_SERVER_AUTHN_STRATEGIES "voms"<br />
SetEnv ROCCI_SERVER_HOOKS oneuser_autocreate<br />
SetEnv ROCCI_SERVER_BACKEND opennebula<br />
SetEnv ROCCI_SERVER_LOG_LEVEL info<br />
SetEnv ROCCI_SERVER_LOG_REQUESTS_IN_DEBUG no<br />
SetEnv ROCCI_SERVER_TMP /tmp/occi_server<br />
SetEnv ROCCI_SERVER_MEMCACHES localhost:11211<br />
<br />
## experimental<br />
SetEnv ROCCI_SERVER_ALLOW_EXPERIMENTAL_MIMES no<br />
<br />
## authN configuration<br />
SetEnv ROCCI_SERVER_AUTHN_VOMS_ROBOT_SUBPROXY_IDENTITY_ENABLE no<br />
<br />
## hooks<br />
#SetEnv ROCCI_SERVER_USER_BLACKLIST_HOOK_USER_BLACKLIST "/path/to/yml/file.yml"<br />
#SetEnv ROCCI_SERVER_USER_BLACKLIST_HOOK_FILTERED_STRATEGIES "voms x509 basic"<br />
SetEnv ROCCI_SERVER_ONEUSER_AUTOCREATE_HOOK_VO_NAMES "dteam ops"<br />
<br />
## ONE backend<br />
SetEnv ROCCI_SERVER_ONE_XMLRPC http://localhost:2633/RPC2<br />
SetEnv ROCCI_SERVER_ONE_USER rocci<br />
SetEnv ROCCI_SERVER_ONE_PASSWD yourincrediblylonganddifficulttoguesspassword<br />
</VirtualHost><br />
</pre><br />
<br />
It is strongly recommended to set '''SSLVerifyClient require''' and '''SetEnv ROCCI_SERVER_AUTHN_STRATEGIES "voms"'''!<br />
<br />
* Support for EGI VOs: [[HOWTO16 | VOMS configuration]]<br />
* Create empty groups ''fedcloud.egi.eu'', ''ops'' and ''dteam'' in OpenNebula.<br />
<br />
==== Perun integration ====<br />
The current rOCCI-server implementation doesn't handle user management and identity propagation; integration with a third-party service is therefore necessary. The [https://perun.metacentrum.cz/perun-gui-cert/ Perun VO] management server, developed and maintained by CESNET, is used to provide user management capabilities for OpenNebula Resource Centres. It uses locally installed scripts (fully under the control of the Resource Centres in question) to propagate changes in the user pool to all registered Resource Centres. Resource Centres are required to install and, if need be, configure these scripts, and to report back to the EGI Cloud Federation for registration in Perun. Installation and configuration details are available online in the [https://github.com/EGI-FCTF/fctf-perun EGI-FCTF/fctf-perun github repository].<br />
<br />
Remember that Perun requires '''SSH access''' to your machine, so that it can invoke the scripts and push user account changes to your site!<br />
<br />
==== Manual account management ====<br />
<br />
If you want to use X.509/VOMS authentication for your users, you need to create users in OpenNebula with the X.509 driver. For a user named 'johnsmith' from the <code>fedcloud.egi.eu</code> VO the command may look like this: <br />
<br />
$ oneuser create johnsmith "/DC=es/DC=irisgrid/O=cesga/CN=johnsmith/VO=fedcloud.egi.eu/Role=NULL/Capability=NULL" --driver x509<br />
<br />
*Then set its properties:<br />
<br />
$ oneuser update &lt;id_x509_user&gt;<br />
X509_DN="/DC=es/DC=irisgrid/O=cesga/CN=johnsmith"<br />
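When several users have to be created this way, the commands can be generated from a list of login/DN pairs. A dry-run sketch (it only prints the <code>oneuser</code> commands so they can be reviewed before piping them to a shell; the logins and DNs below are illustrative):

```shell
#!/bin/bash
# Emit `oneuser create` commands for "login DN" pairs read from stdin (dry run).
gen_oneuser_cmds() {
    local vo="$1" login dn
    while read -r login dn; do
        [ -z "$login" ] && continue
        echo "oneuser create $login \"$dn/VO=$vo/Role=NULL/Capability=NULL\" --driver x509"
    done
}

# Example (review the output, then pipe it to `sh` to execute):
gen_oneuser_cmds fedcloud.egi.eu <<'EOF'
johnsmith /DC=es/DC=irisgrid/O=cesga/CN=johnsmith
EOF
```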
<br />
=== EGI Accounting ===<br />
<br />
See [[Fedcloud-tf:WorkGroups:Scenario4#OpenNebula_Accounting_Scripts|OpenNebula Accounting Scripts]].<br />
<br />
=== EGI Information System ===<br />
<br />
Sites must publish information to the EGI information system, which is based on BDII. There is a common [https://github.com/EGI-FCTF/cloud-bdii-provider bdii provider] for all cloud management frameworks. Information on installation and configuration is available in the cloud-bdii-provider [https://github.com/EGI-FCTF/cloud-bdii-provider/blob/master/README.md README.md] and in the [[Fedclouds BDII instructions]], which include a [[Fedclouds_BDII_instructions#OpenNebula_.2B_rOCCI|specific section with OpenNebula details]].<br />
<br />
=== EGI Image Management ===<br />
'''Important notice''': the current version of this integration component requires manual intervention from the site administrator when a new appliance/image is registered (NOT on subsequent updates). The site administrator must manually create a Virtual Machine Template and, in this template, reference the image in question by IMAGE and IMAGE_UNAME. This is a temporary workaround and will be removed in the next release of the vmcatcher integration component.<br />
<br />
Sites in FedCloud offering VM management capability must give access to VO-endorsed VM images. This functionality is provided with vmcatcher (which is able to subscribe to the image lists available in AppDB) and a set of tools that push the subscribed images into the image catalog. In order to subscribe to VO-wide image lists, you need to have a valid access token for the AppDB. Check the [https://wiki.appdb.egi.eu/main:faq:how_to_get_access_to_vo-wide_image_lists how to get access to VO-wide image lists] and [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher how to subscribe to a private image list] documentation for more information.<br />
<br />
Please refer to [https://github.com/hepix-virtualisation/vmcatcher vmcatcher documentation] for installation. <br />
<br />
[https://github.com/grid-admin/vmcatcher_eventHndlExpl_ON vmcatcher_eventHndlExpl_ON] is a VMcatcher event handler for OpenNebula that stores or disables images based on the VMcatcher response. The following guide shows how to install and configure the VMcatcher handler as the oneadmin user, directly from GitHub. The configuration will automatically synchronize the OpenNebula Image datastore with the registered vmcatcher images. <br />
<br />
*Install the prerequisites for the VMcatcher handler<br />
<br />
[oneadmin@one-sandbox]$ sudo yum install -y qemu-img<br />
<br />
*Install VMcatcher handler from github<br />
<br />
[oneadmin@one-sandbox]$ mkdir $HOME/vmcatcher_eventHndlExpl_ON<br />
[oneadmin@one-sandbox]$ cd $HOME/vmcatcher_eventHndlExpl_ON<br />
[oneadmin@one-sandbox]$ wget http://github.com/grid-admin/vmcatcher_eventHndlExpl_ON/archive/v0.0.8.zip -O vmcatcher_eventHndlExpl_ON.zip<br />
[oneadmin@one-sandbox]$ unzip vmcatcher_eventHndlExpl_ON.zip<br />
[oneadmin@one-sandbox]$ mv vmcatcher_eventHndlExpl_ON*/* ./<br />
[oneadmin@one-sandbox]$ rmdir vmcatcher_eventHndlExpl_ON-*<br />
<br />
*Create the vmcatcher folders for ON. Do not use /var/lib/one/ or other OpenNebula default directories for the vmcatcher cache, since you cannot import images into OpenNebula from these directories. Also, since this directory will host a copy of all the images downloaded via vmcatcher, it is suggested to place it on a separate disk.<br />
<br />
[oneadmin@one-sandbox]$ sudo mkdir -p /opt/vmcatcher-ON/cache /opt/vmcatcher-ON/cache/partial /opt/vmcatcher-ON/cache/expired /opt/vmcatcher-ON/cache/templates<br />
[oneadmin@one-sandbox]$ sudo chown oneadmin:oneadmin -R /opt/vmcatcher-ON<br />
<br />
*Check that vmcatcher is running properly by listing and subscribing to an image list<br />
<br />
[oneadmin@one-sandbox]$ export VMCATCHER_RDBMS="sqlite:////opt/vmcatcher-ON/vmcatcher.db"<br />
[oneadmin@one-sandbox]$ vmcatcher_subscribe -l<br />
[oneadmin@one-sandbox]$ vmcatcher_subscribe -e -s https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list<br />
[oneadmin@one-sandbox]$ vmcatcher_subscribe -l<br />
8ddbd4f6-fb95-4917-b105-c89b5df99dda True None https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list<br />
<br />
*Create a CRON wrapper for vmcatcher, named <code>/var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh</code>, using the following code<br />
<br />
#!/bin/bash<br />
#Cron handler for the VMCatcher image synchronization script for OpenNebula<br />
<br />
#Vmcatcher configuration variables<br />
export VMCATCHER_RDBMS="sqlite:////opt/vmcatcher-ON/vmcatcher.db"<br />
export VMCATCHER_CACHE_DIR_CACHE="/opt/vmcatcher-ON/cache"<br />
export VMCATCHER_CACHE_DIR_DOWNLOAD="/opt/vmcatcher-ON/cache/partial"<br />
export VMCATCHER_CACHE_DIR_EXPIRE="/opt/vmcatcher-ON/cache/expired"<br />
export VMCATCHER_CACHE_EVENT="python $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON"<br />
<br />
#Update vmcatcher image lists<br />
vmcatcher_subscribe -U<br />
<br />
#Add all the new images to the cache<br />
for a in `vmcatcher_image -l | awk '{if ($2==2) print $1}'`; do<br />
vmcatcher_image -a -u $a<br />
done<br />
<br />
#Update the cache<br />
vmcatcher_cache -v -v<br />
<br />
*Test that the vmcatcher handler is working correctly by running<br />
<br />
[oneadmin@one-sandbox]$ chmod +x $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh<br />
[oneadmin@one-sandbox]$ $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh<br />
INFO:main:Defaulting actions as 'expire', and 'download'.<br />
DEBUG:Events:event 'ProcessPrefix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=2014-07-16 12:25:49,586; DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'ProcessPrefix'<br />
2014-07-16 12:25:49,586; WARNING; vmcatcher_eventHndl_ON; main -- Ignoring event 'ProcessPrefix'<br />
<br />
INFO:DownloadDir:Downloading '541b01a8-94bd-4545-83a8-6ea07209b440'.<br />
DEBUG:Events:event 'AvailablePrefix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=2014-07-16 12:26:00,522; DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'AvailablePrefix'<br />
2014-07-16 12:26:00,522; WARNING; vmcatcher_eventHndl_ON; main -- Ignoring event 'AvailablePrefix'<br />
<br />
INFO:CacheMan:moved file 541b01a8-94bd-4545-83a8-6ea07209b440<br />
DEBUG:Events:event 'AvailablePostfix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=2014-07-16 12:26:00,567; DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'AvailablePostfix'<br />
2014-07-16 12:26:00,567; DEBUG; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- Starting HandleAvailablePostfix for '541b01a8-94bd-4545-83a8-6ea07209b440'<br />
2014-07-16 12:26:00,571; INFO; vmcatcher_eventHndl_ON; UntarFile -- /opt/vmcatcher-ON/cache/541b01a8-94bd-4545-83a8-6ea07209b440 is an OVA file. Extracting files...<br />
2014-07-16 12:26:00,599; INFO; vmcatcher_eventHndl_ON; UntarFile -- Converting /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440/CoreLinux-disk1.vmdk to raw format.<br />
2014-07-16 12:26:00,641; INFO; vmcatcher_eventHndl_ON; UntarFile -- New RAW image created: /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440/CoreLinux-disk1.vmdk.raw<br />
2014-07-16 12:26:00,642; INFO; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- Creating template file /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440.one<br />
2014-07-16 12:26:00,780; INFO; vmcatcher_eventHndl_ON; getImageListXML -- Getting image list: oneimage list --xml<br />
2014-07-16 12:26:00,784; INFO; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- There is not a previous image with the same UUID in the OpenNebula infrastructure<br />
2014-07-16 12:26:00,785; INFO; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- Instantiating template: oneimage create -d default /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440.one | cut -d ':' -f 2<br />
<br />
DEBUG:Events:event 'ProcessPostfix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=2014-07-16 12:26:01,077; DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'ProcessPostfix'<br />
2014-07-16 12:26:01,077; WARNING; vmcatcher_eventHndl_ON; main -- Ignoring event 'ProcessPostfix'<br />
<br />
*Add the following line to the oneadmin user's crontab:<br />
<br />
50 */6 * * * $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh &gt;&gt; /var/log/vmcatcher.log 2&gt;&amp;1<br />
<br />
''NOTES:'' <br />
<br />
*vmcatcher_cache must be executed as oneadmin user. <br />
*Environment variables can be used to set default values but the command line options will override any set environment options. Set these env variables for oneadmin user: VMCATCHER_RDBMS, VMCATCHER_CACHE_DIR_CACHE, VMCATCHER_CACHE_DIR_DOWNLOAD, VMCATCHER_CACHE_DIR_EXPIRE and VMCATCHER_CACHE_EVENT. <br />
*vmcatcher_eventHndlExpl_ON generates ON image templates. These templates are available in $VMCATCHER_CACHE_DIR_CACHE/templates (template naming: $VMCATCHER_EVENT_DC_IDENTIFIER.one) <br />
*The new ON images include ''VMCATCHER_EVENT_DC_IDENTIFIER = &lt;VMCATCHER_UUID&gt;'' tag. This tag is used to identify Fedcloud VM images. <br />
*Images expired by VMcatcher are set as disabled in ON. It is up to the RC to remove disabled images or to assign new images to a specific ON group or user.<br />
<br />
=== Registration of services in GOCDB ===<br />
<br />
Site cloud services must be registered in [https://goc.egi.eu EGI Configuration Management Database (GOCDB)]. If you are creating a new site for your cloud services, check the [[PROC09|PROC09 Resource Centre Registration and Certification]] procedure. Services can also coexist within an existing (grid) site.<br />
<br />
If offering an OCCI interface, sites should register the following services:<br />
* eu.egi.cloud.vm-management.occi for the OCCI endpoint offered by the site. Please note the special endpoint URL syntax described at [[Federated_Cloud_Technology#eu.egi.cloud.vm-management.occi|GOCDB usage in FedCloud]]<br />
* eu.egi.cloud.accounting (the host should be your OCCI machine)<br />
* eu.egi.cloud.vm-metadata.vmcatcher (the host is also your OCCI machine)<br />
* Sites should also declare the following properties using the ''Site Extension Properties'' feature:<br />
*# Max number of virtual cores for VM with parameter name: <code>cloud_max_cores4VM</code> <br />
*# Max amount of RAM for VM with parameter name: <code>cloud_max_RAM4VM</code> using the format: value+unit, e.g. "16GB".<br />
*# Max amount of storage that could be mounted in a VM with parameter name: <code>cloud_max_storage4VM</code> using the format: value+unit, e.g. "16GB".<br />
<br />
Once the site services are registered in GOCDB and set as monitored they will be checked by the [https://cloudmon.egi.eu/nagios Cloud SAM instance].<br />
<br />
== Installation Validation ==<br />
<br />
You can check your installation by following these steps: <br />
<br />
*Check in [https://cloudmon.egi.eu/nagios Cloudmon] that your services are listed and are passing the tests. If all the tests are OK, your installation is already in good shape. <br />
*Check that you are publishing cloud information in your site BDII:<br><code>ldapsearch -x -h &lt;site bdii host&gt; -p 2170 -b Glue2GroupID=cloud,Glue2DomainID=&lt;your site name&gt;,o=glue</code><br />
*Check that all the images listed on the [https://appdb.egi.eu/store/vo/fedcloud.egi.eu AppDB page for the fedcloud.egi.eu VO] are listed in your BDII. This sample query will return all the template IDs registered in your BDII:<br><code>ldapsearch -x -h &lt;site bdii host&gt; -p 2170 -b Glue2GroupID=cloud,Glue2DomainID=&lt;your site name&gt;,o=glue objectClass=GLUE2ApplicationEnvironment GLUE2ApplicationEnvironmentRepository</code><br />
*Try to start one of those images in your cloud. You can do it with <code>onetemplate instantiate</code> or OCCI commands; the result should be the same.<br />
*Execute the [[HOWTO04_Site_Certification_Manual_tests#Check_the_functionality_of_the_cloud_elements|site certification manual tests]] against your endpoints.<br />
*Check in the [http://accounting-devel.egi.eu/cloud.php accounting portal] that your site is listed and the values reported look consistent with the usage of your site.<br />
<br />
[[Category:Operations_Manuals]]</div>
Patrykl
https://wiki.egi.eu/w/index.php?title=Long-tail_of_science&diff=86804
Long-tail of science
2016-04-01T10:36:56Z
<p>Patrykl: </p>
<hr />
<div>{{Template:Op menubar}} {{TOC_right}} <br />
<br />
'''This page provides information about the '[http://access.egi.eu EGI platform for the Long-tail of science]'. The long-tail of science refers to the individual researchers and small laboratories who, as opposed to large, expensive collaborations, do not have access to computational resources and online services to manage and analyse large amounts of data. This EGI platform allows individual researchers and small research teams to perform compute- and data-intensive simulations on large, distributed networks of computers in a user-friendly way. If you are interested in the project that developed and now maintains the platform, please jump to the [[Long-tail_of_science_project|Long-tail of science project]] page.'''<br />
<br />
<br />
= Information for users =<br />
<br />
== What can you access in the platform? ==<br />
<br />
The platform is accessible through [http://access.egi.eu this portal] and offers grid, cloud and application services from across the EGI community for individual researchers and small research teams. The platform offers the following type of resources: <br />
<br />
*High-throughput computing sites for running compute/data-intensive jobs <br />
*Cloud sites suited for both compute/data intensive jobs and hosting of scientific services <br />
*Storage resources for storing job input and output data, and for setting up data catalogues <br />
*Science gateways that provide graphical web environments for building and executing applications in the platform. <br />
*Applications that are made available ‘as services’ through the science gateways.<br />
<br />
Currently available resources in the platform:<br> <br />
<br />
{| width="60%" border="1" cellpadding="1" cellspacing="1"<br />
|-<br />
| Type <br />
| Name <br />
| Description<br />
|-<br />
| Cloud and storage site <br />
| INFN Catania Openstack site<br />
| INFN-CATANIA-STACK site capacity: <br />
*20 vCPUs <br />
*50GB RAM <br />
*10 floating IPs <br />
*10TB storage&nbsp;<br />
|-<br />
| Cloud and storage site <br />
| RECAS Bari Openstack site<br />
| RECAS-BARI site capacity: <br />
*15 vCPUs <br />
*30GB RAM <br />
*1TB storage&nbsp;<br />
|-<br />
| High-Throughput Compute and Storage site<br />
| INFN Catania gLite site <br />
| GILDA-INFN-CATANIA site capacity: <br />
*1M HEPSPEC-hours<br />
*30GB of /opt/exp_soft <br />
*10GB RAM<br />
*100GB of opportunistic disk storage<br />
|-<br />
| High-Throughput Compute and Storage site<br />
| INFN BARI gLite site <br />
| INFN-BARI site capacity: <br />
*0.5M HEPSPEC-hours <br />
*2GB RAM per core<br />
*100GB of opportunistic disk storage<br />
|-<br />
| High-Throughput Compute and Storage site<br />
| ACC CYFRONET AGH gLite site <br />
| CYFRONET-LCG2 site capacity: <br />
*50 CPU cores <br />
*50 GB of /opt/exp_soft <br />
*3 GB RAM per core<br />
*500 GB of opportunistic disk storage<br />
|-<br />
| Science gateway <br />
| Catania Science Gateway <br />
| The [https://www.catania-science-gateways.it/home Catania Science Gateway] is a new-generation, standards-based Science Gateway that changes the way e-Infrastructures are used. The gateway incorporates several scientific applications and offers them 'as services' to the user.<br />
|-<br />
| Science gateway <br />
| WS-PGRADE <br />
| The [https://guse.sztaki.hu/ WS-PGRADE Portal] (Web Services Parallel Grid Runtime and Developer Environment Portal) is the Liferay-based web portal (WS-PGRADE web application) of gUSE, which also includes a graphical portal service. WS-PGRADE is a web portal hosted in a standard portal framework, using the client APIs of gUSE services to turn user requests into sequences of gUSE-specific Web service calls.<br />
|-<br />
| Application <br />
| Hello World <br />
| Hello World is a simple grid-based application that demonstrates the use of remote resources by printing the hostname where the job is executed. It is accessible through the Catania Science Gateway.<br />
|-<br />
| Application<br />
| The Statistical R <br />
| [https://www.r-project.org/ R] is a language and environment for statistical computing and graphics. It is accessible through the Catania Science Gateway.<br />
|- <br />
| Application<br />
| Chipster<br />
| [http://chipster.csc.fi/ Chipster] is user-friendly analysis software for high-throughput data. It contains over 300 analysis tools for next-generation sequencing (NGS), microarray, proteomics and sequence data. Users can save and share automatic analysis workflows, and visualize data interactively using a built-in genome browser and many other visualizations.<br />
|-<br />
| Application<br />
| ClustalW2<br />
| [http://www.clustal.org/clustal2/ ClustalW2] is a multiple sequence alignment tool for the alignment of DNA or protein sequences. <br />
|-<br />
| Application <br />
| The Semantic Search Engine (SSE) <br />
| SSE is a framework conceived to demonstrate the potential of information coupled with semantic web technologies to address the issues of data discovery and correlation. It is accessible through the Catania Science Gateway.<br />
|}<br />
<br />
'''Would you like to access an application, gateway or resource that is not yet integrated into the platform? Request it by email at long-tail-support@mailman.egi.eu!'''<br />
<br />
== Who can access the platform? ==<br />
<br />
The platform is open to any researcher who needs simple and user-friendly access to compute, storage and application services in order to carry out data- or compute-intensive science and innovation. You need to be affiliated with, or at least have a partner (for example a referee) at, a European research institution to qualify for access. The platform is designed to meet the needs of individual researchers and small research groups who have limited or no experience with distributed and cloud computing. <br />
<br />
== How can you access the platform? ==<br />
<br />
# Login to the [http://access.egi.eu entry portal] with an EGI SSO, Google or Facebook account. <br />
# Provide information on your profile page about your affiliation to a research institute or team. <br />
# Request resources from the platform: Indicate what you would like to achieve with the resources so we can help you find the most suitable ones. <br />
# After your request is approved, login to any of the science gateways and build or execute compute/data intensive applications.<br />
<br />
== Presentations about the platform ==<br />
* Overview of the EGI Platform for the long tail of science (EGI Community Forum, November 2015): [https://indico.egi.eu/indico/contributionDisplay.py?contribId=83&confId=2544]<br />
* Poster and animated slides from Demo at EGI Community Forum, November 2015 (Winner of best demo prize): [https://indico.egi.eu/indico/contributionDisplay.py?contribId=124&confId=2544]<br />
* Slideset about the concept of the EGI long-tail of science platform (from Nov. 2014): [https://documents.egi.eu/document/2358]<br />
* Slideset about the authentication and authorization model adopted (from Nov. 2015): [https://documents.egi.eu/document/2363]<br />
<br />
= Guide for providers =<br />
<br />
* Science gateway and resource providers must accept and follow the platform security policy: https://wiki.egi.eu/wiki/SPG:Drafts:LToS_Service_Scoped_Security_Policy<br />
* Science gateway providers must integrate with the User Registration Portal to enable the single sign-on capability for users.<br />
* Science gateway providers must integrate with the per-user subproxy solution to offer traceable user authentication towards the e-infrastructure VO.<br />
* Science gateways must implement user resource usage quota (to prohibit a user consuming all the resources from the platform). <br />
* Resource providers must support the per-user subproxy solution and join the e-infrastructure VO.<br />
<br />
The below subsections provide guidance to complete these steps. <br />
<br />
== How to connect a science gateway to the platform ==<br />
<br />
=== Connecting the science gateway with the User Registration Portal ===<br />
<br />
<br>'''Client service registration'''<br><br>1. Open a GGUS ticket to operations, including the return URIs of your gateway.<br><br>2. The UNITY team sends back the client's clientID and secretKey.<br> <br />
'''Authorization procedure (Unity with the client):'''<br />
<br />
1] The client sends an authorization request to the OpenID Provider.<br />
<br />
Address: https://unity.egi.eu/oauth2-as/oauth2-authz<br />
<br />
Parameters:<br />
*<code>response_type</code>: code<br />
*<code>redirect_uri</code>: your registered redirect URL<br />
*<code>client_id</code>: the client ID received during registration<br />
*<code>state</code>: an anti-CSRF token you generate yourself, e.g. a random nonce<br />
*<code>scope</code>: profile openid<br />
<br />
Example:<br />
<PRE><br />
https://unity.egi.eu/oauth2-as/oauth2-authz<br />
    response_type=code<br />
    &client_id=123123123<br />
    &redirect_uri=https%3A%2F%2Fclient.pl%2Fauth<br />
    &scope=openid%20profile<br />
    &state=a123a123a123<br />
</PRE><br />
<br />
2] The Authorization Server authenticates the end-user.<br />
<br />
3] The Authorization Server obtains the end-user's consent/authorization.<br />
<br />
4] The Authorization Server sends the end-user back to the redirect URI from the first request, together with a code. Example of the response:<br />
<PRE><br />
Location: https://client.pl/auth?<br />
    code=uniquecode123<br />
    &state=a123a123a123<br />
</PRE><br />
<br />
5] The client sends the code to the Token Endpoint to receive an Access Token and an ID Token in the response:<br />
<PRE><br />
POST /token HTTP/1.1<br />
  Host: client.pl<br />
  Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW<br />
  Content-Type: application/x-www-form-urlencoded<br />
<br />
  grant_type=authorization_code&code=uniquecode123<br />
    &redirect_uri=https%3A%2F%2Fclient.pl%2Fauth<br />
</PRE><br />
<br />
6] The client validates the tokens and retrieves the end-user's Subject Identifier. Example:<br />
<PRE><br />
  HTTP/1.1 200 OK<br />
  Content-Type: application/json<br />
  Cache-Control: no-store<br />
  Pragma: no-cache<br />
  {<br />
   "access_token":"accessToken123",<br />
   "token_type":"Bearer",<br />
   "expires_in":3600,<br />
   "refresh_token":"refreshToken123",<br />
   "id_token":"idToken123123"<br />
  }<br />
</PRE><br />
You should decode the <code>id_token</code> and validate it (more information: http://openid.net/specs/openid-connect-basic-1_0.html).<br />
<br />
7] The client retrieves user attributes from the userinfo endpoint (https://unity.egi.eu/oauth2/userinfo). Example:<br />
<PRE><br />
https://unity.egi.eu/oauth2/userinfo?schema=openid&access_token=accessToken123<br />
</PRE><br />
<br />
8] The response contains user information, such as the email address and name, in JSON format.<br />
<br />
Important configuration data:<br />
<PRE><br />
unity.server.clientId=     [YOUR CLIENT ID]<br />
unity.server.clientSecret= [YOUR SECRET KEY]<br />
unity.server.authorize=    https://unity.egi.eu/oauth2-as/oauth2-authz<br />
unity.server.token=        https://unity.egi.eu/oauth2/token<br />
unity.server.base=         https://unity.egi.eu<br />
</PRE><br />
<br />
The full configuration is published at https://unity.egi.eu/oauth2/.well-known/openid-configuration<br />
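For illustration, the first step of this flow (building the authorization URL with a randomly generated <code>state</code>) can be sketched in Python. The client ID and redirect URI below are the placeholder values from the example, and the helper name is hypothetical:

```python
import secrets
from urllib.parse import urlencode

AUTHZ_ENDPOINT = "https://unity.egi.eu/oauth2-as/oauth2-authz"

def build_authorization_url(client_id: str, redirect_uri: str):
    """Build the URL the user's browser is redirected to (step 1)."""
    # Random anti-CSRF token; keep it in the session and compare it with
    # the 'state' value returned by the Authorization Server in step 4.
    state = secrets.token_hex(16)
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",
        "state": state,
    }
    return AUTHZ_ENDPOINT + "?" + urlencode(params), state

# Placeholder values from the example above
url, state = build_authorization_url("123123123", "https://client.pl/auth")
```

After the user authenticates and the gateway receives the code, the token request (step 5) is an ordinary HTTPS POST with Basic authentication, as shown above.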
<br />
==== OpenId Connect for Liferay ====<br />
OpenId Connect for Liferay is a very rough but effective implementation of the OpenId connect protocol for Liferay.<br />
Use this [https://github.com/csgf/OpenIdConnectLiferay module] to authenticate with any OpenId Connect provider.<br />
<br />
=== Connecting the science gateway with per-user subproxies ===<br />
<br />
The platform uses [[Long-tail_of_science#Per-user_sub-proxies|per-user sub-proxies]] (PUSPs) for user authentication. Any connected science gateway must generate per-user sub-proxies for its users and must use these for any interaction with VO resources on behalf of the users. A gateway can generate PUSPs in two ways: <br />
# From a robot certificate that is physically hosted on the gateway server itself, OR<br />
# From a remote robot certificate that is hosted in the e-Token Server by INFN Catania. <br />
We recommend the first option. If there is an IGTF CA in your country that issues robot certificates, obtain a robot certificate from this CA. If such robots are not available in your country or region, EGI can issue one for you from the SEEGRID catch-all CA. The next subsections provide detailed information to complete these steps.<br />
<br />
==== Generic requirements ====<br />
<br />
The Per-User Sub-Proxy (PUSP) and End-Entity Certificate (EEC) must satisfy the following requirements:<br />
<br />
<ol><br />
<li> The EEC is a valid robot certificate:<br />
<ul><br />
<li> it either contains OID 1.2.840.113612.5.2.3.3.1, see https://www.eugridpma.org/objectid/?oid=1.2.840.113612.5.2.3.3.1<br />
<li> or its DN matches the regular expression "<tt>/CN=[rR]obot[^/[:alnum:]]</tt>", i.e. it contains a CN field which starts with ''robot'' or ''Robot'' followed by a non-alphanumeric, non-slash character; see https://www.eugridpma.org/guidelines/robot/ section 3.<br />
</ul><br />
<li> The PUSP is RFC 3820 compliant, i.e. no legacy GT2 or GT3 proxies<br />
<li> The PUSP is the first proxy delegation<br />
<li> If the same user enters via the same portal, they must get the same PUSP DN<br />
<li> No two distinct identified users will have the same PUSP DN.<br />
</ol><br />
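As a sanity check, the DN rule above can be expressed with the published regular expression. The sketch below translates the POSIX <tt>[:alnum:]</tt> class into Python syntax; it is only an illustration, not the authoritative validation performed by the middleware, and the helper name is hypothetical:

```python
import re

# "/CN=[rR]obot[^/[:alnum:]]" from the EUGridPMA robot guidelines,
# with the POSIX [:alnum:] class expanded for Python's re module:
# a CN starting with "robot"/"Robot" followed by a character that is
# neither a slash nor alphanumeric.
ROBOT_CN_RE = re.compile(r"/CN=[rR]obot[^/a-zA-Z0-9]")

def looks_like_robot_dn(dn: str) -> bool:
    """Return True if the DN contains a robot-style CN field."""
    return ROBOT_CN_RE.search(dn) is not None
```

For example, <code>/CN=Robot: Catania Science Gateway - ...</code> matches (the colon after "Robot" is neither a slash nor alphanumeric), while an ordinary user DN does not.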
<br />
A robot EEC that generates PUSP credentials SHOULD NOT be used for any other purpose; for example, it should not be used to generate non-PUSP proxy credentials and should not be used for direct authentication.<br />
<br />
The machine/service that will take care of PUSP generation and management should respect the following rules:<br />
<ol><br />
<li> Documented response procedures in case of incidents (that are periodically tested).<br />
<li> A listed/accredited CSIRT team.<br />
<li> Internal risk assessment and an actuarial team to calculate the effective risk<br />
</ol><br />
<br />
====Using a robot certificate from your national IGTF CA====<br />
# Obtain a robot certificate from your national IGTF Certification Authority following the instructions [http://www.egi.eu/how-to/get_a_certificate.html here]. <br />
# Register the robot in the vo.access.egi.eu VO: https://perun.metacentrum.cz/cert/registrar/?vo=vo.access.egi.eu<br />
# Generate proxies from the robot using this script: https://ndpfsvn.nikhef.nl/viewvc/mwsec/trunk/lcmaps-plugins-robot/tools/<br />
<br />
====Obtaining a robot certificate from EGI catch-all CA====<br />
# Contact long-tail-support@mailman.egi.eu and send a short description of your gateway service and the way it would be integrated with platform resources. The team will arrange a robot certificate for your gateway from the SEEGRID CA (which operates as a 'catch-all' CA in EGI) and will register this in the VO and in the e-Token Server in Italy. <br />
# Register the robot in the vo.access.egi.eu VO: https://perun.metacentrum.cz/cert/registrar/?vo=vo.access.egi.eu<br />
# Generate proxies from the robot using this script: https://ndpfsvn.nikhef.nl/viewvc/mwsec/trunk/lcmaps-plugins-robot/tools/<br />
<br />
====Instructions to use the e-Token Server====<br />
# Contact long-tail-support@mailman.egi.eu and send a short justification why you would like to use the eToken server (instead of hosting the robot certificate locally). Describe your gateway service and the way it would be integrated with platform resources. The team will arrange a robot certificate for your gateway from the SEEGRID CA and will register it in vo.access.egi.eu.<br />
# Provide long-tail-support@mailman.egi.eu with the static IP address of your gateway server, so proxy requests can be authorized from this address on the e-Token Server. <br />
# Generate proxies from the e-Token server following this guideline: <br />
<br />
There are two available e-Token Server instances for availability and reliability reasons:<br />
* etokenserver.ct.infn.it<br />
* etokenserver2.ct.infn.it<br />
<br />
The following REST API is available to obtain a PUSP given a unique identifier:<br />
<PRE><br />
https://[eToken Server instance]:8443/eTokenServer/eToken/[Robot Certificate ID]?voms=[VO]:/[VO]&proxy-renewal=[true|false]&disable-voms-proxy=[true|false]&rfc-proxy=[true|false]&cn-label=user:[user unique identifier]<br />
</PRE><br />
<br />
* '''Robot certificate ID''': the ID of your robot certificate in the e-Token Server. It is generated when your robot is set up in the e-Token Server.<br />
* '''VO''': the VO you want to use to perform any action on the EGI infrastructure. The robot certificate must be a member of this VO.<br />
* '''proxy-renewal''': this option is used to enable (true) or disable (false) the automatic registration of a long-term proxy into a MyProxy Server.<br />
* '''disable-voms-proxy''': this option is used to generate plain (true) or VOMS proxy certificate (false).<br />
* '''rfc-proxy''': this option is used to generate standard RFC proxies (true) or legacy proxies (false).<br />
* '''cn-label''': this option is used to generate a PUSP for the given unique identifier.<br />
<br />
Below is an example:<br />
<PRE><br />
https://[eToken Server instance]:8443/eTokenServer/eToken/27br90771bba31acb942efe4c8209e69?voms=training.egi.eu:/training.egi.eu&proxy-renewal=false&disable-voms-proxy=false&rfc-proxy=true&cn-label=user:test1<br />
</PRE><br />
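A gateway could also assemble this request URL programmatically. The Python sketch below uses a hypothetical helper name; note that <code>urlencode</code> percent-encodes the query values, whereas the example above shows them in literal form:

```python
from urllib.parse import urlencode

def pusp_request_url(server: str, robot_id: str, vo: str, user_id: str,
                     proxy_renewal: bool = False,
                     disable_voms_proxy: bool = False,
                     rfc_proxy: bool = True) -> str:
    """Build the e-Token Server REST URL that returns a PUSP for user_id."""
    query = urlencode({
        "voms": f"{vo}:/{vo}",                    # VO the robot is a member of
        "proxy-renewal": str(proxy_renewal).lower(),
        "disable-voms-proxy": str(disable_voms_proxy).lower(),
        "rfc-proxy": str(rfc_proxy).lower(),
        "cn-label": f"user:{user_id}",            # per-user label for the PUSP
    })
    return f"https://{server}:8443/eTokenServer/eToken/{robot_id}?{query}"

# Values taken from the example above
url = pusp_request_url("etokenserver.ct.infn.it",
                       "27br90771bba31acb942efe4c8209e69",
                       "training.egi.eu", "test1")
```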
<br />
=== Connecting the gateway with the EGI monitoring system ===<br />
...<br />
<br />
== How to join as a resource provider ==<br />
<br />
Any EGI resource provider can join the platform to offer capacity for members of the long-tail of science. The site needs to run one of the supported grid or cloud middleware stacks, enable per-user sub-proxies (for user authentication and authorisation), and join the [http://operations-portal.egi.eu/vo/view/voname/vo.access.egi.eu vo.access.egi.eu Virtual Organisation]. The next subsections provide instructions on how to enable per-user sub-proxies on EGI sites. Please email long-tail-support@egi.eu if you wish to join as a resource provider.<br />
<br />
In order to authorize the users of the LToS VO, a couple of DNs (Distinguished Names) need to be configured on the services to be enabled. For instance, for the CREAM CE they are added to the usual grid-mapfile; for OpenStack, to /etc/keystone/voms.json. You can find below the instructions for each service. <br />
<br />
The following Robot Certificate DNs must be configured: <br />
<br />
<pre>/DC=EU/DC=EGI/C=HU/O=Robots/O=MTA SZTAKI/CN=Robot:zfarkas@sztaki.hu<br />
/C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: Catania Science Gateway - Roberto Barbera</pre> <br />
<br />
=== Instructions for OpenStack providers ===<br />
<br />
Keystone-VOMS supports per-user sub-proxies in a dedicated branch called <code>subproxy_support</code>, available in the GitHub repository https://github.com/enolfc/keystone-voms (the code is being integrated into the main branch of Keystone-VOMS). You can install the code from the repository following these instructions: <br />
<pre> git clone -b subproxy_support https://github.com/enolfc/keystone-voms.git<br />
cd keystone-voms<br />
pip install .<br />
</pre> <br />
Configuration and deployment of the plugin does not change from the normal Keystone-VOMS plugin, follow the [https://keystone-voms.readthedocs.org/en/latest/ Keystone-VOMS documentation] to deploy it. <br />
<br />
There are new parameters to configure in your keystone config file, under the <code>[voms]</code> section: <br />
<br />
*<code>allow_subproxy</code>: set it to <code>True</code> to enable PUSP support. <br />
*<code>subproxy_robots</code>: set it to <code>*</code> (recommended) or to a list of the DNs that are allowed to create PUSPs in the system. <br />
*<code>subproxy_user_prefix</code>: determines the expected prefix of the PUSP user specification. It is safe to leave it undefined so that the default value (<code>CN=eToken</code>) is used.<br />
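Put together, the <code>[voms]</code> section of your Keystone configuration might then look like the sketch below. All values are illustrative and must be adapted to your deployment:

```ini
# Illustrative keystone.conf fragment for PUSP support
[voms]
# Path to the VOMS authorization policy (see the Keystone-VOMS docs)
voms_policy = /etc/keystone/voms.json
# Enable per-user sub-proxy (PUSP) support
allow_subproxy = True
# "*" accepts PUSPs from any robot DN; alternatively list allowed DNs
subproxy_robots = *
# Usually left undefined (the default is CN=eToken); shown for completeness
subproxy_user_prefix = CN=eToken
```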
<br />
=== Instructions for gLite providers ===<br />
<br />
There is an EGI manual that shows how to set up a per-user sub-proxy to allow identification of the individual users under a common robot certificate. You can find the guide here: https://wiki.egi.eu/wiki/MAN12<br />
<br />
=== Instructions for OpenNebula providers ===<br />
<br />
OpenNebula sites are not yet supported in the platform.<br />
<br />
== How to join the user support team ==<br />
<br />
If you wish to support platform users from your country, region or scientific disciplinary area, then please email long-tail-support@egi.eu. We can train you and then register you as a supporter in our team. <br />
<br />
<br><br />
<br />
= Technical and architecture details =<br />
<br />
== User Registration Portal ==<br />
The User Registration Portal of the platform is hosted by CYFRONET in Poland and serves as the entry point for users. The portal offers login with social or EGI SSO accounts, allows users to manage their profiles and resource requests, and acts as a central hub to access the connected science gateways. The portal is used by the user support team to review user profiles and to evaluate the users' resource requests. The portal is accessible at http://access.egi.eu.<br />
<br />
== Virtual Organisation ==<br />
<br />
The HTC, cloud and storage resources of the platform are federated through the 'vo.access.egi.eu' Virtual Organisation (VO) of EGI. Technical details of this VO are the following: <br />
* ID Card in the EGI Operations Portal: http://operations-portal.egi.eu/vo/view/voname/vo.access.egi.eu<br />
* Name: vo.access.egi.eu<br />
* Scope: Global<br />
* Homepage URL: https://wiki.egi.eu/wiki/Long-tail_of_science<br />
* Acceptable use policy for users: https://documents.egi.eu/document/2635<br />
* Discipline: Support Activities<br />
* VO Membership management: VOMS+PERUN<br />
** perun.cesnet.cz. The enrollment url is https://perun.metacentrum.cz/perun-registrar-cert/?vo=vo.access.egi.eu<br />
** voms1.grid.cesnet.cz and voms2.grid.cesnet.cz <br />
* Contacts: <br />
** <long-tail-support@mailman.egi.eu> for all support issues.<br />
** Managers: Gergely.Sipos@egi.eu, Diego.Scardaci@egi.eu, Peter.Solagna@egi.eu<br />
<br />
== Per-user sub-proxies ==<br />
The purpose of a '''per-user sub-proxy (PUSP)''' is to allow identification of the individual users that operate using a common robot certificate. A common example is where a web portal (e.g., a scientific gateway) somehow identifies its user and wishes to authenticate as that user when interacting with EGI resources. This is achieved by creating a proxy credential from the robot credential with the proxy certificate containing user-identifying information in its additional proxy CN field. The user-identifying information may be pseudo-anonymised where only the portal knows the actual mapping.<br />
<br />
Example of a Per-User Sub-Proxy (PUSP):<br />
<PRE><br />
subject : /C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: EGI Training Service - XXXXX/CN=user:test1/CN=1286259828<br />
issuer : /C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: EGI Training Service - XXXXX/CN=user:test1<br />
identity : /C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: EGI Training Service - XXXXX<br />
type : RFC3820 compliant impersonation proxy<br />
strength : 1024<br />
path : /home/XXXXX/proxy.txt<br />
timeleft : 23:59:15<br />
key usage : Digital Signature, Key Encipherment, Data Encipherment<br />
=== VO training.egi.eu extension information ===<br />
VO : training.egi.eu<br />
subject : /C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: EGI Training Service - XXXXX<br />
issuer : /DC=org/DC=terena/DC=tcs/OU=Domain Control Validated/CN=voms1.grid.cesnet.cz<br />
attribute : /training.egi.eu/Role=NULL/Capability=NULL<br />
timeleft : 23:59:17<br />
uri : voms1.grid.cesnet.cz:15014<br />
</PRE><br />
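A service that needs to map a PUSP back to its robot identity and per-user label can split the subject DN at the first <code>/CN=user:</code> component, since the PUSP appends it (plus a numeric proxy CN) to the robot EEC subject. A minimal Python sketch with a hypothetical helper name:

```python
def pusp_identity(subject: str):
    """Split a PUSP subject DN into (robot identity, user label)."""
    marker = "/CN=user:"
    head, sep, tail = subject.partition(marker)
    if not sep:                   # not a PUSP subject: no user label
        return subject, None
    user = tail.split("/")[0]     # strip the trailing proxy CN(s)
    return head, user

# Subject taken from the example above (robot CN shortened)
subj = ("/C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: EGI Training Service"
        "/CN=user:test1/CN=1286259828")
identity, user = pusp_identity(subj)
```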
<br />
== E-Token Server ==<br />
The platform adopted the '''e-Token server''' [1] as a central service to generate PUSPs for science gateways. In a nutshell, the e-Token server is a standards-based solution, developed and hosted by INFN Catania, for the central management of robot certificates and the provisioning of short-term digital proxies from them, allowing seamless and secure access to e-Infrastructures with an X.509-based authorisation layer.<br />
<br />
The e-Token server uses the standard JAX-RS framework [2] to implement RESTful web services in Java and provides end-users, portals and the new generation of Science Gateways with a set of REST APIs to generate PUSPs given a unique identifier. PUSPs are usually generated starting from standard X.509 certificates. These digital certificates have to be uploaded onto one of the secure USB smart cards (e.g. SafeNet Aladdin eToken PRO 32/64 KB) plugged into the server.<br />
<br />
The e-Token server was conceived to provide a credential translation service to Science Gateways and web portals that need to interact with the EGI platform for the long-tail of science (and, in general, with any e-Infrastructure).<br />
<br />
[1] Valeria Ardizzone, Roberto Barbera, Antonio Calanducci, Marco Fargetta, E. Ingrà, Ivan Porro, Giuseppe La Rocca, Salvatore Monforte, R. Ricceri, Riccardo Rotondo, Diego Scardaci, Andrea Schenone: The DECIDE Science Gateway. Journal of Grid Computing 10(4): 689-707 (2012)<br />
<br />
[2] Java API for RESTful Web Services (JAX-RS): https://en.wikipedia.org/wiki/Java_API_for_RESTful_Web_Services<br />
<br />
== Policies ==<br />
<br />
* Acceptable Use Policy and Conditions of Use of the EGI Platform for the Long-tail of Science: https://documents.egi.eu/document/2635<br />
* [[SPG:Drafts:LToS Service Scoped Security Policy]]<br />
<br />
== Links for administrators ==<br />
<br />
User approval:<br />
# Approve affiliation: https://access.egi.eu:8888/modules#/list/Affiliations<br />
# Approve resource request: https://e-grant.egi.eu/ltos/auth/login<br />
<br />
Gateway and support approval:<br />
* VO membership management interface in PERUN: https://perun.metacentrum.cz/cert/gui/<br />
* To register in the VO (relevant for gateway robot certificates and for support staff): https://perun.metacentrum.cz/cert/registrar/?vo=vo.access.egi.eu<br />
<br />
Monitoring:<br />
* Detailed accounting data about the VO users can be obtained by the VO managers at https://accounting-devel.egi.eu/user/voadm.php<br />
* To see the list of VO members: https://voms1.grid.cesnet.cz:8443/voms/vo.access.egi.eu/user/search.action<br />
<br />
Accounting:<br />
* Accounting data of platform users: ...<br />
* ...<br />
<br />
= Roadmap =<br />
<br />
{| class="wikitable sortable"<br />
|-<br />
! scope="col" | No <br />
! scope="col" | Task <br />
! scope="col" | Priority<br> <br />
! scope="col" | Responsible <br />
! scope="col" | Start date <br />
! scope="col" | Deadline <br />
! scope="col" | Comment <br />
! scope="col" | STATUS<br />
|-<br />
| <br> <br />
| Definition of the LTOS portal Terms and Conditions <br />
| Medium<br> <br />
| Solagna <br />
| <br> <br />
| 1 April <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| Setup of the structures (team, processes, procedures) needed to support the LTOS platform<br> <br />
| Medium<br> <br />
| Solagna <br />
| <br> <br />
| 1 May <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| Registration of LTOS components in GOC&nbsp;DB<br> <br />
| High<br> <br />
| Krakowian <br />
| started <br />
| 1 April <br />
| <br />
[https://goc.egi.eu/portal/index.php?Page_Type=Site&id=1565 GRIDOPS-CSGF] <br> <br />
<br />
[https://goc.egi.eu/portal/index.php?Page_Type=Site&id=1525 GRIDOPS-LTOS]<br> <br />
<br />
missing registration of administrators and some additional info for GRIDOPS-LTOS <br />
<br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10485 <span>Agree on OLAs supporting LTOS resources</span>] <br />
| High<br> <br />
| Krakowian<br> <br />
| <br> <br />
| 1 April <br />
| <br> <br />
| In progress<br />
|-<br />
| <br> <br />
| Finalization of the LTOS business model <br> <br />
| Medium<br> <br />
| Solagna<br> <br />
| <br> <br />
| 1 May<br> <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=9616 Integrate WS-PGRADE gUSE to LTOS]<br> <br />
| High<br> <br />
| La Rocca<br> <br />
| started<br> <br />
| 1 April<br> <br />
| <br> <br />
https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=116323 <br />
<br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=9682 Accounting system integration]<br> <br />
| Medium<br> <br />
| La Rocca <br />
| started <br />
| TBD<br> <br />
| <br> <br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=9684 Implementing Roles in the URP]<br> <br />
| Low<br> <br />
| Szepieniec<br> <br />
| <br> <br />
| TBD<br> <br />
| better understand requirement<br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=9685 <strike>Instruction for Lifewary providers</strike>]<strike><br></strike> <br />
| <br> <br />
| La Rocca <br />
| started<br> <br />
| finished<br> <br />
| https://github.com/csgf/OpenIdConnectLiferay <br />
| DONE<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10164 Space for the resource providers logos]<br> <br />
| Low<br> <br />
| Szepieniec<br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
Logos of NGIs/institutions providing resources for the LToS platform should be added on page [1] (in the bottom). [1] https://access.egi.eu/start <br />
<br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10166 Integration with QCG]<br> <br />
| Medium<br> <br />
| La Rocca<br> <br />
| started<br> <br />
| TBD<br> <br />
| https://ggus.eu/?mode=ticket_info&amp;ticket_id=117764 <br />
| In progress <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10226 Login modes]<br> <br />
| Medium<br> <br />
| Szepieniec<br> <br />
| <br> <br />
| 1 April explanation <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10228 page not refreshed]<br> <br />
| Medium <br />
| Szepieniec <br />
| <br> <br />
| 1 April explanation <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10229 Rephprase point 3 of "How can you access the platform?"]<br> <br />
| Low<br> <br />
| Szepieniec <br />
| <br> <br />
| TBD <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10230 accepting and rejecting the affiliations]<br> <br />
| Medium<br> <br />
| Szepieniec <br />
| <br> <br />
| 1 April explanation <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10231 information menu]<br> <br />
| Medium<br> <br />
| Szepieniec <br />
| <br> <br />
| 1 May <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10232 General usage policy]<br> <br />
| Medium <br />
| Szepieniec <br />
| <br> <br />
| 1 April<br> <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10233 notifications]<br> <br />
| High<br> <br />
| Szepieniec<br> <br />
| <br> <br />
| 1 April<br> <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10235 Link to www.egi.eu]<br> <br />
| Low<br> <br />
| Szepieniec <br />
| <br> <br />
| 1 May <br />
| access.egi.eu does already contain an EGI logo but the link is wrong. It should point to www.egi.eu instead of https://access.egi.eu/<br> <br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10236 Pre-defined templates for the requests]<br> <br />
| High <br />
| Szepieniec <br />
| <br> <br />
| 1 April <br />
| HTC [Computing] = 10k hours<br> HTC [Storage] = 100 GB of total storage capacity<br> Cloud [Computing] = 10 vCPU cores per hour<br> Cloud [Storage] = 100 GB of storage volume <br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10237 Add contacts for support/requests]<br> <br />
| Low<br> <br />
| Szepieniec <br />
| <br> <br />
| TBD <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10238 Access to general usage policy]<br> <br />
| Medium <br />
| Szepieniec <br />
| <br> <br />
| 1 April <br />
| MK+GLR where to put link <br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10240 Add an institutional email for the communications]<br> <br />
| High<br> <br />
| Peter<br> <br />
| <br> <br />
| TBD <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10241 Users should always be able to go back to the home page]<br> <br />
| Medium <br />
| Szepieniec <br />
| <br> <br />
| 1 June <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10478 <strike>Monitoring of URP</strike>]<br> <br />
| High <br />
| Krakowian<br> <br />
| <br> <br />
| <br />
| http://argo.egi.eu/lavoisier/status_report-site?report=OPS-MONITOR-Critical&amp;accept=html<br> <br />
| DONE<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10479 <strike>Monitoring of SGs. Update SG integration doc in Wiki accordingly</strike>]<br> <br />
| High <br />
| Krakowian <br />
| <br> <br />
| <br />
| http://argo.egi.eu/lavoisier/status_report-site?report=OPS-MONITOR-Critical&amp;accept=html<br> <br />
| DONE<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10480 Setup GGUS units for trouble tickets]<br> <br />
| High <br />
| Peter<br> <br />
| <br> <br />
| TBD <br />
| <br> <br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10482 Define identity vetting manual for user request approvers]<br> <br />
| High<br> <br />
| La Rocca<br> <br />
| <br> <br />
| TBD <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10483 Sign OLA with URP provider]<br> <br />
| High <br />
| Krakowian<br> <br />
| 21.03<br> <br />
| 1 April<br> <br />
| <br> <br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10484 Sign OLA with SG]<br> <br />
| High <br />
| Krakowian <br />
| 21.03 <br />
| 1 April <br />
| <br> <br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10486 Document process on how to monitor user-level accounting &amp; how to respond to quota overuse]<br> <br />
| Low<br> <br />
| La Rocca<br> <br />
| <br> <br />
| TBD<br> <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10487 Manage user-level quota inside the SG]<br> <br />
| Low <br />
| La Rocca <br />
| <br> <br />
| TBD <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10488 Define and implement process for downtime notification]<br> <br />
| Medium<br> <br />
| Krakowian<br> <br />
| <br> <br />
| TBD<br> <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10489 <strike>Move the security policy into final document format</strike>]<br> <br />
| High<br> <br />
| Krakowian<br> <br />
| 14.03.2016<br> <br />
| 1 April<br> <br />
| [https://documents.egi.eu/document/2769 https://documents.egi.eu/document/2769] <br />
| DONE<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10490 Discuss details of joining with interested sites and SGs]<br> <br />
| High<br> <br />
| La Rocca<br> <br />
| <br> <br />
| TBD<br> <br />
| <br> <br />
| In progress<br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10491 Involve NGI representatives in request approver team]<br> <br />
| Medium<br> <br />
| Solagna<br> <br />
| <br> <br />
| 1 April<br> <br />
| <br> <br />
| <br />
|-<br />
| <br> <br />
| [https://rt.egi.eu/rt/Ticket/Display.html?id=10492 Adoption of URP to Hungarian Academic Cloud]<br> <br />
| Low<br> <br />
| Sipos<br> <br />
| <br> <br />
| <br> <br />
| <br> <br />
| <br />
|}<br />
<br />
[[Category:Task_forces]]</div>
Patrykl
https://wiki.egi.eu/w/index.php?title=MAN10&diff=85702
MAN10
2016-02-03T22:33:56Z
<p>Patrykl: /* EGI Image Management */</p>
<hr />
<div>{{Template:Op menubar}} {{Template:Doc_menubar}} {{TOC_right}} <br />
<br />
{{Ops_procedures<br />
|Doc_title = Setting up Cloud Resource Centre<br />
|Doc_link = [[MAN10|https://wiki.egi.eu/wiki/MAN10]]<br />
|Version = 19 August 2014<br />
|Policy_acronym = OMB<br />
|Policy_name = Operations Management Board<br />
|Contact_group = operations-support@mailman.egi.eu<br />
|Doc_status = DRAFT<br />
|Approval_date = <br />
|Procedure_statement = This manual provides information on how to set up a Cloud Resource Centre.<br />
}} <br />
<br />
<br />
<br />
= Introduction =<br />
<br />
The EGI cloud supports three cloud management frameworks. This means you can base your cloud site installation on one of the following cloud software stacks: <br />
<br />
*OpenNebula <br />
*OpenStack <br />
*Synnefo<br />
<br />
If you want to install an EGI Cloud Site, please have a look at the EGI Cloud Site Installation Manuals below. <br />
<br />
''<span style="color: rgb(51,102,255);">Note: An EGI Cloud Site Installation Manual is a step-by-step instruction set for Cloud Site Admins. The manual is not meant to be comprehensive on topics related to the installation; it is a collection of steps taken by someone to install an EGI cloud site starting from scratch. Commands executed should be made available for others to copy&amp;paste and easily follow. At this initial stage the manual may not cover all cases, but it is meant to be extended by other site admins as they follow it. It is a living document.<br />
</span>''<br />
<br> <br />
<br />
= The manuals =<br />
<br />
<span style="color: rgb(51,102,255);">'''''Current issues:'''''</span> <br />
<br />
*<span style="color: rgb(51,102,255);">''Documentation for cloud components is written with the assumption that the admin knows where (machine, neighbouring components) these components should be installed. It is missing the general cloud site deployment context.''</span> <br />
*<span style="color: rgb(51,102,255);">''Documentation should address the prerequisites part.''</span><span style="color: rgb(51,102,255);">&nbsp;</span> <br />
*<span style="color: rgb(51,102,255);">''Documentation should address the constraints and limitations part, e.g. supported operating systems and software versions.''</span> <br />
*<span style="color: rgb(51,102,255);">''Documentation should provide a contact person (per component) who can be contacted in case of questions/problems.&nbsp;''</span><span style="color: rgb(51,102,255);">&nbsp;</span> <br />
*<span style="color: rgb(51,102,255);">''Documentation should provide commands for checking the validity of the installation.''</span><span style="color: rgb(51,102,255);"><br />
</span><br />
<br />
<br />
== Prerequisites &amp; Limitations ==<br />
<br />
Whatever cloud stack you choose, you need to prepare some things at the beginning: <br />
<br />
#Hardware (minimal hardware requirements for a small cloud site, e.g. up to 100 VMs): <br> <br />
##number of physical machines, performance/capacity requirements: RAM size <br />
##disk space - how much, where it must be connected, performance of network links (images are heavy!) <br />
#DNS names, X.509 certificates <br />
#Registration in the fedcloud VO <br />
#Registration in AppDB to get access to the private EGI VM image repository <br />
#Supported operating systems<br />
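For the X.509 prerequisite, the host certificate can be sanity-checked before registration. The sketch below uses the usual grid certificate path as a placeholder and, purely for demonstration, falls back to a throwaway self-signed certificate if no real one is present:<br />

```shell
#!/bin/sh
# Sanity-check the host certificate before registering the site.
# The path below is the usual grid location; override CERT to test elsewhere.
CERT=${CERT:-/etc/grid-security/hostcert.pem}

# For demonstration only: fall back to a throwaway self-signed certificate
# when no real host certificate is present.
if [ ! -f "$CERT" ]; then
  CERT=/tmp/demo-hostcert.pem
  openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
    -subj "/CN=$(hostname -f 2>/dev/null || echo localhost)" \
    -keyout /tmp/demo-hostkey.pem -out "$CERT" 2>/dev/null
fi

# The subject CN must match the machine's DNS name; check the validity window too.
openssl x509 -in "$CERT" -noout -subject -dates

# Warn if the certificate expires within 30 days (2592000 seconds).
if openssl x509 -in "$CERT" -noout -checkend 2592000 >/dev/null; then
  echo "certificate valid for at least 30 more days"
else
  echo "certificate expires soon - request a renewal now"
fi
```

The 30-day threshold is only an example; pick whatever renewal lead time your CA requires.<br />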
<br />
=Cloud management frameworks=<br />
<br />
== OpenStack ==<br />
<br />
An EGI Cloud site can be based on OpenStack software with some EGI extensions. See the deployment schema (''Note: <span style="color: rgb(51,102,255);">high-level description of which modules are to be put on which machines.</span>'') <br />
<br />
=== OpenStack installation ===<br />
Integration with FedCloud requires a working OpenStack installation. Follow the general documentation at http://docs.openstack.org/; there are packages ready to use for most distributions (see for example [https://openstack.redhat.com/Main_Page RDO] for RedHat-based distributions). <br />
<br />
OpenStack integration with FedCloud is known to work with the following versions of OpenStack:<br />
* ''Havana'' (EOL by OpenStack, should not be used in production)<br />
* '''Icehouse'''<br />
* '''Juno'''<br />
* '''Kilo'''<br />
<br />
Suggested list of services to provide FedCloud integration:<br />
* Keystone service must be available in any case.<br />
* If providing OCCI access (VM management):<br />
** Nova<br />
** Cinder<br />
** Glance<br />
** Neutron (nova-network can also be used for legacy installations), [http://docs.openstack.org/havana/install-guide/install/yum/content/section_networking-routers-with-private-networks.html Per-tenant routers with private networks] configuration is known to work.<br />
* If providing CDMI access (Object storage):<br />
** Swift<br />
<br />
=== OpenStack integration ===<br />
<br />
==== Integration Prerequirements ====<br />
* Working OpenStack installation.<br />
* Valid IGTF-trusted host certificates for Keystone. You may also use host certificates for OCCI if serving nova-api via https.<br />
* Recommended policy for nova to avoid users accessing other users' resources:<br />
<pre><br />
[root@egi-cloud]# sed -i 's|"admin_or_owner": "is_admin:True or project_id:%(project_id)s",|"admin_or_owner": "is_admin:True or project_id:%(project_id)s",\n "admin_or_user": "is_admin:True or user_id:%(user_id)s",|g' /etc/nova/policy.json<br />
[root@egi-cloud]# sed -i 's|"default": "rule:admin_or_owner",|"default": "rule:admin_or_user",|g' /etc/nova/policy.json<br />
[root@egi-cloud]# sed -i 's|"compute:get_all": "",|"compute:get": "rule:admin_or_owner",\n "compute:get_all": "",|g' /etc/nova/policy.json<br />
</pre><br />
<br />
{| style="border:1px solid black; padding:5px; margin: auto; width: 1000px;"<br />
|+ OpenStack-based site with FedCloud integration components<br />
| [[File:Openstack-fedcloud.png|800px]]<br />
|-<br />
|<br />
The following components must be installed alongside OpenStack:<br />
* '''OCCI-OS''', which provides a standard OCCI interface. It translates between OpenStack API and OCCI.<br />
* '''Keystone-VOMS''', which allows users with a valid VOMS proxy to access the OpenStack deployment.<br />
* '''cASO''' scripts, which collect accounting data from OpenStack and publish those into EGI's APEL instance.<br />
* '''BDII''', which registers the site's configuration and description through the EGI Information System to facilitate service discovery. <br />
* '''vmcatcher''', which checks the [https://appdb.egi.eu/browse/cloud EGI App DB] for new or updated images that need to be supported on the site. It downloads images and registers them with OpenStack, so that they can be used in resource instantiation.<br />
|}<br />
<br />
==== EGI User Management/AAI ====<br />
<br />
Every FedCloud site must support authentication of users with X.509 certificates with VOMS extensions. The [https://ifca.github.io/keystone-voms Keystone-VOMS] extension enables this kind of authentication on Keystone. <br />
<br />
* Installation: documentation on the installation is available at [https://keystone-voms.readthedocs.org/ Keystone-voms documentation]. Make sure to use the correct documentation for your OpenStack version. <br />
<br />
* Take into account that using the keystone-voms plugin will enforce the use of https for your Keystone service; you will need to update the URLs in the Keystone catalog and in the configuration of your services:<br />
** You will probably need to add your CA to your system's CA bundle to avoid certificate validation issues: <code>/etc/ssl/certs/ca-certificates.crt</code> from the <code>ca-certificates</code> package on Debian/Ubuntu systems, or <code>/etc/pki/tls/certs/ca-bundle.crt</code> from the <code>ca-certificates</code> package on RH and derived systems. Check the packages' documentation on how to add a new CA to those bundles.<br />
** replace http with https in <code>auth_[protocol|uri|url]</code> and <code>auth_[host|uri|url]</code> in the nova, cinder, glance and neutron config files (<code>/etc/nova/nova.conf</code>, <code>/etc/nova/api-paste.ini</code>, <code>/etc/neutron/neutron.conf</code>, <code>/etc/neutron/api-paste.ini</code>, <code>/etc/neutron/metadata_agent.ini</code>, <code>/etc/cinder/cinder.conf</code>, <code>/etc/cinder/api-paste.ini</code>, <code>/etc/glance/glance-api.conf</code>, <code>/etc/glance/glance-registry.conf</code>, <code>/etc/glance/glance-cache.conf</code>) and any other service that needs to check keystone tokens. <br />
** You can update the URLs of the services directly in the database:<br />
<pre><br />
mysql> use keystone;<br />
mysql> update endpoint set url="https://<keystone-host>:5000/v2.0" where url="http://<keystone-host>:5000/v2.0";<br />
mysql> update endpoint set url="https://<keystone-host>:35357/v2.0" where url="http://<keystone-host>:35357/v2.0";<br />
</pre><br />
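The http-to-https replacement in the service configuration files can also be scripted. The sketch below is illustrative only: <code>keystone.example.org</code> and the file list are placeholders, and each file is backed up before editing — adapt both to your deployment:<br />

```shell
#!/bin/sh
# Switch Keystone endpoints from http to https in OpenStack service configs.
# KEYSTONE_HOST is a placeholder - set it to your real Keystone host name.
KEYSTONE_HOST=${KEYSTONE_HOST:-keystone.example.org}

switch_to_https() {
  for f in "$@"; do
    [ -f "$f" ] || continue                  # skip configs absent on this node
    cp "$f" "$f.bak"                         # keep a backup before editing
    sed -i "s|http://${KEYSTONE_HOST}:|https://${KEYSTONE_HOST}:|g" "$f"
  done
}

# Typical files carrying auth_uri/auth_url settings (adjust to your deployment):
switch_to_https /etc/nova/nova.conf /etc/nova/api-paste.ini \
  /etc/cinder/cinder.conf /etc/glance/glance-api.conf /etc/neutron/neutron.conf
```

Remember to restart the affected services after the change so they pick up the new URLs.<br />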
<br />
<br />
* Support for EGI VOs: [[HOWTO16 | VOMS configuration]]; you should configure the fedcloud.egi.eu, dteam and ops VOs.<br />
<br />
* VOMS-Keystone configuration: most sites should enable the <code>autocreate_users</code> option in the <code>[voms]</code> section of the [https://keystone-voms.readthedocs.org/en/latest/configuration.html Keystone-VOMS configuration]. This ensures that new users are automatically created in your local keystone the first time they log into your site.<br />
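With <code>autocreate_users</code> enabled, the <code>[voms]</code> section of <code>keystone.conf</code> might look like the sketch below. The option names are taken from the Keystone-VOMS documentation and the paths are the usual grid locations — verify both against the version you deploy:<br />
<pre><br />
[voms]<br />
vomsdir_path = /etc/grid-security/vomsdir<br />
ca_path = /etc/grid-security/certificates<br />
voms_policy = /etc/keystone/voms.json<br />
autocreate_users = True<br />
</pre><br />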
<br />
==== EGI Virtual Machine Management Interface -- OCCI ====<br />
<br />
OCCI is the EGI-approved access method for computing resources that VM management cloud services must expose. [https://github.com/EGI-FCTF/occi-os OCCI-OS] is the recommended software to provide this capability.<br />
<br />
OCCI-OS can be installed from the github repo (recommended) or by using pip (packages may not be up-to-date!). The module must be installed on the machines hosting your nova-api. Installation instructions are available in the <code>README.md</code> file of the repo. Before installing OCCI-OS, you should manually install pyssf (<code>pip install pyssf</code>). If installing from the github repo, '''be sure to select the appropriate branch for your OpenStack installation''', e.g. for an OpenStack Icehouse installation:<br />
<pre><br />
$ pip install pyssf<br />
<br />
$ git clone https://github.com/EGI-FCTF/occi-os.git -b stable/icehouse<br />
Cloning into 'occi-os'...<br />
remote: Counting objects: 1312, done.<br />
remote: Total 1312 (delta 0), reused 0 (delta 0), pack-reused 1312<br />
Receiving objects: 100% (1312/1312), 357.53 KiB | 0 bytes/s, done.<br />
Resolving deltas: 100% (752/752), done.<br />
Checking connectivity... done.<br />
<br />
$ cd occi-os<br />
$ python setup.py install<br />
running install<br />
running bdist_egg<br />
running egg_info<br />
creating openstackocci_icehouse.egg-info<br />
...<br />
Finished processing dependencies for openstackocci-icehouse==1.0<br />
</pre><br />
<br />
Configuration is also detailed in the [https://github.com/EGI-FCTF/occi-os/#configuration OCCI-OS readme file].<br />
<br />
==== EGI Accounting ====<br />
<br />
Every cloud site must publish utilization data to the EGI accounting database. You will need to install [https://github.com/IFCA/caso cASO], a pluggable extractor of Cloud Accounting Usage Records from OpenStack.<br />
* The latest version is available on PyPI (https://pypi.python.org/pypi/caso/); you can install it with <code>pip install caso</code>.<br />
* The [http://caso.readthedocs.org/en/latest/ cASO documentation] explains how to install cASO and how to configure OpenStack for generating the accounting records.<br />
* Source code is available at the [https://github.com/IFCA/caso cASO github repo].<br />
* Packages for Ubuntu distributions are built at the [https://build.opensuse.org/project/show/home:aloga:cloud:integration OpenSUSE build service home:aloga:cloud:integration project].<br />
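A minimal <code>caso.conf</code> sketch is shown below. All values are placeholders and the option names follow the cASO documentation of this period — verify them against the version you install:<br />
<pre><br />
[DEFAULT]<br />
# Site name as registered in GOCDB (placeholder value)<br />
site_name = MY-SITE<br />
# Tenants/VOs to extract accounting records for (placeholder values)<br />
tenants = egi, fedcloud<br />
# Extractor to use; nova is the usual choice<br />
extractor = nova<br />
</pre><br />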
<br />
In order to send the records to the accounting database, you will also need to configure SSM. Follow the [https://wiki.egi.eu/wiki/Fedcloud-tf:WorkGroups:Scenario4#Publishing_Records publishing records documentation at the accounting scenario page].<br />
<br />
==== EGI Information System ====<br />
<br />
Sites must publish information to the EGI information system, which is based on BDII. There is a common [https://github.com/EGI-FCTF/cloud-bdii-provider bdii provider] for all cloud management frameworks. Information on installation and configuration is available in [https://github.com/EGI-FCTF/cloud-bdii-provider/blob/master/README.md the cloud-bdii-provider README.md] and in the [[Fedclouds BDII instructions]]; there is a [[Fedclouds_BDII_instructions#OpenStack|specific section with OpenStack details]].<br />
<br />
==== EGI Image Management ====<br />
<br />
Sites in FedCloud offering VM management capability must give access to VO-endorsed VM images. This functionality is provided by vmcatcher (which is able to subscribe to the image lists available in AppDB) and a set of tools that push the subscribed images into the glance catalog. In order to subscribe to VO-wide image lists, you need a valid access token for the AppDB. Check the [https://wiki.appdb.egi.eu/main:faq:how_to_get_access_to_vo-wide_image_lists how to access VO-wide image lists] and [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher how to subscribe to a private image list] documentation for more information.<br />
<br />
Please refer to the [https://github.com/hepix-virtualisation/vmcatcher vmcatcher documentation] for installation. <br />
<br />
Vmcatcher can be connected to the OpenStack Glance catalog using the [https://appdb.egi.eu/store/software/python.glancepush python-glancepush] tool and the [https://appdb.egi.eu/store/software/openstack.handler.for.vmcatcher OpenStack Handler for Vmcatcher] event handler. To install and configure glancepush and the handler, follow these instructions: <br />
<br />
*Install the latest release of glancepush via the [https://appdb.egi.eu/store/software/python.glancepush/releases/0.0.x AppDB repository]. For Debian-based systems, just download the tarball, extract it, and execute <code>python setup.py install</code>:<br />
<br />
[stack@ubuntu]$ wget http://repository.egi.eu/community/software/python.glancepush/0.0.X/releases/generic/0.0.6/python-glancepush-0.0.6.tar.gz<br />
[stack@ubuntu]$ tar -zxvf python-glancepush-0.0.6.tar.gz<br />
[stack@ubuntu]$ cd python-glancepush-0.0.6<br />
[stack@ubuntu]$ python setup.py install<br />
<br />
while for RHEL6 you can run: <br />
<br />
[stack@rhel]$ yum localinstall http://repository.egi.eu/community/software/python.glancepush/0.0.X/releases/sl/6/x86_64/RPMS/python-glancepush-0.0.6-1.noarch.rpm<br />
<br />
*Then, configure glancepush directories<br />
<br />
[stack@ubuntu]$ sudo mkdir -p /var/spool/glancepush /etc/glancepush/log /etc/glancepush/transform/ /etc/glancepush/clouds /var/log/glancepush<br />
[stack@ubuntu]$ sudo chown stack:stack -R /var/spool/glancepush /etc/glancepush /var/log/glancepush/<br />
<br />
*Copy the file /etc/keystone/voms.json to /etc/glancepush/voms.json. Then create a file in the clouds directory for every VO to which you are subscribed. For example, if you're subscribed to fedcloud, atlas and lhcb, you'll need 3 files in the /etc/glancepush/clouds directory with the credentials for these VOs/tenants, for example:<br />
<br />
[general]<br />
# Tenant for this VO. Must match the tenant defined in voms.json file<br />
testing_tenant=egi<br />
# Identity service endpoint (Keystone)<br />
endpoint_url=https://server4-eupt.unizar.es:5000/v2.0<br />
# User Password<br />
password=123456<br />
# User<br />
username=John<br />
# Set this to true if you're NOT using self-signed certificates<br />
is_secure=True<br />
# SSH private key that will be used to perform policy checks (to be done)<br />
ssh_key=Carlos_lxbifi81<br />
# WARNING: Only define the next variable if you're going to need it. Otherwise you may encounter problems<br />
cacert=path_to_your_cert<br />
<br />
*Install the [https://appdb.egi.eu/store/software/openstack.handler.for.vmcatcher Openstack handler for vmcatcher]. For Debian-based systems, just download the tarball, extract it, and execute <code>python setup.py install</code>:<br />
<br />
[stack@ubuntu]$ wget http://repository.egi.eu/community/software/openstack.handler.for.vmcatcher/0.0.X/releases/generic/0.0.7/gpvcmupdate-0.0.7.tar.gz<br />
[stack@ubuntu]$ tar -zxvf gpvcmupdate-0.0.7.tar.gz<br />
[stack@ubuntu]$ cd gpvcmupdate-0.0.7<br />
[stack@ubuntu]$ python setup.py install<br />
<br />
while for RHEL6 you can run: <br />
<br />
[stack@rhel]$ yum localinstall http://repository.egi.eu/community/software/openstack.handler.for.vmcatcher/0.0.X/releases/sl/6/x86_64/RPMS/gpvcmupdate-0.0.7-1.noarch.rpm<br />
<br />
*Create the vmcatcher folders for OpenStack<br />
<br />
[stack@ubuntu]$ mkdir -p /opt/stack/vmcatcher/cache /opt/stack/vmcatcher/cache/partial /opt/stack/vmcatcher/cache/expired<br />
<br />
*Check that vmcatcher is running properly by subscribing to an image list and listing the subscriptions<br />
<br />
[stack@ubuntu]$ export VMCATCHER_RDBMS="sqlite:////opt/stack/vmcatcher/vmcatcher.db"<br />
[stack@ubuntu]$ vmcatcher_subscribe -l<br />
[stack@ubuntu]$ vmcatcher_subscribe -e -s https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list<br />
[stack@ubuntu]$ vmcatcher_subscribe -l<br />
8ddbd4f6-fb95-4917-b105-c89b5df99dda True None https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list<br />
<br />
*Create a CRON wrapper for vmcatcher, named <code>$HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh</code>, using the following code<br />
<br />
#!/bin/bash<br />
#Cron handler for the VMCatcher image synchronization script for OpenStack<br />
<br />
#Vmcatcher configuration variables<br />
export VMCATCHER_RDBMS="sqlite:////opt/stack/vmcatcher/vmcatcher.db"<br />
export VMCATCHER_CACHE_DIR_CACHE="/opt/stack/vmcatcher/cache"<br />
export VMCATCHER_CACHE_DIR_DOWNLOAD="/opt/stack/vmcatcher/cache/partial"<br />
export VMCATCHER_CACHE_DIR_EXPIRE="/opt/stack/vmcatcher/cache/expired"<br />
export VMCATCHER_CACHE_EVENT="python $HOME/gpvcmupdate/gpvcmupdate.py -D"<br />
<br />
#Update vmcatcher image lists<br />
vmcatcher_subscribe -U<br />
<br />
#Add all the new images to the cache<br />
for a in `vmcatcher_image -l | awk '{if ($2==2) print $1}'`; do<br />
vmcatcher_image -a -u $a<br />
done <br />
<br />
#Update the cache<br />
vmcatcher_cache -v -v<br />
<br />
#Run glancepush<br />
/usr/bin/glancepush.py<br />
<br />
*Set the newly created file as executable<br />
<br />
[stack@ubuntu]$ chmod +x $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh<br />
<br />
*Test that the vmcatcher handler is working correctly by running<br />
<br />
[stack@ubuntu]$ $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh<br />
INFO:main:Defaulting actions as 'expire', and 'download'.<br />
DEBUG:Events:event 'ProcessPrefix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=Ignoring ProcessPrefix event.<br />
INFO:DownloadDir:Downloading '541b01a8-94bd-4545-83a8-6ea07209b440'.<br />
DEBUG:Events:event 'AvailablePrefix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'<br />
DEBUG:Events:stdout=AvailablePrefix<br />
DEBUG:Events:stderr=<br />
INFO:CacheMan:moved file 541b01a8-94bd-4545-83a8-6ea07209b440<br />
DEBUG:Events:event 'AvailablePostfix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'<br />
DEBUG:Events:stdout=AvailablePostfixCreating Metadata Files<br />
DEBUG:Events:stderr=<br />
DEBUG:Events:event 'ProcessPostfix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=Ignoring ProcessPostfix event.<br />
<br />
<br> <br />
<br />
*Add the following line to the stack user crontab:<br />
<br />
50 */6 * * * $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh &gt;&gt; /var/log/glancepush/vmcatcher.log 2&gt;&amp;1<br />
<br />
''NOTES:'' <br />
<br />
*It is recommended to execute glancepush and vmcatcher_cache as stack or another non-root user. <br />
*Images expired by vmcatcher are removed from OpenStack.<br />
<br />
==== Registration of services in GOCDB ====<br />
<br />
Site cloud services must be registered in [https://goc.egi.eu EGI Configuration Management Database (GOCDB)]. If you are creating a new site for your cloud services, check the [[PROC09|PROC09 Resource Centre Registration and Certification]] procedure. Services can also coexist within an existing (grid) site.<br />
<br />
If offering OCCI interface, sites should register the following services:<br />
* eu.egi.cloud.vm-management.occi for the OCCI endpoint offered by the site. Please note the special endpoint URL syntax described at [[Federated_Cloud_Architecture#Central_service_registry:_GOCDB|GOCDB usage in FedCloud]]<br />
* eu.egi.cloud.accounting (host should be your OCCI machine)<br />
* eu.egi.cloud.vm-metadata.vmcatcher (the host should also be your OCCI machine)<br />
* The site should also declare the following properties using the ''Site Extension Properties'' feature:<br />
*# Max number of virtual cores for a VM, with parameter name: <code>cloud_max_cores4VM</code> <br />
*# Max amount of RAM for a VM, with parameter name: <code>cloud_max_RAM4VM</code>, using the format value+unit, e.g. "16GB".<br />
*# Max amount of storage that can be mounted in a VM, with parameter name: <code>cloud_max_storage4VM</code>, using the format value+unit, e.g. "16GB".<br />
<br />
If offering a CDMI interface, the site should register:<br />
* eu.egi.cloud.storage-management.cdmi. Note also the endpoint URL syntax described at [[Federated_Cloud_Architecture#Central_service_registry:_GOCDB|GOCDB usage in FedCloud]]<br />
<br />
Once the site services are registered in GOCDB and set as monitored, they will be checked by the [https://cloudmon.egi.eu/nagios Cloud SAM instance].<br />
<br />
=== Installation Validation ===<br />
<br />
You can check your installation following these steps: <br />
<br />
#Check in [https://cloudmon.egi.eu/nagios Cloudmon] that your services are listed and are passing the tests. If all the tests are OK, your installation is already in good shape. <br />
#Check that you are publishing cloud information in your site BDII:<br><code>ldapsearch -x -h &lt;site bdii host&gt; -p 2170 -b Glue2GroupID=cloud,Glue2DomainID=&lt;your site name&gt;,o=glue</code><br />
#Check that all the images listed on the [https://appdb.egi.eu/store/vo/fedcloud.egi.eu AppDB page for the fedcloud.egi.eu VO] are listed in your BDII. This sample query will return all the VM ids registered in your BDII:<br><code>ldapsearch -x -h &lt;site bdii host&gt; -p 2170 -b Glue2GroupID=cloud,Glue2DomainID=&lt;your site name&gt;,o=glue objectClass=GLUE2ApplicationEnvironment GLUE2ApplicationEnvironmentRepository</code><br />
#Try to start one of those images in your cloud (you can do it with nova or OCCI commands); the result should be the same. <br><br />
#Execute the [[HOWTO04_Site_Certification_Manual_tests#Check_the_functionality_of_the_cloud_elements|site certification manual tests]] against your endpoints.<br><br />
#Check in the [http://accounting-devel.egi.eu/cloud.php accounting portal] that your site is listed and the values reported look consistent with the usage of your site.<br />
<br />
== OpenNebula ==<br />
<br />
=== OpenNebula FedCloud Site Architecture ===<br />
<br />
==== Components ====<br />
<br />
<!--{| style="border:1px solid black; background-color:yellow; color: black; padding:5px; font-size:140%; width: 90%; margin: auto;"<br />
| style="padding-right: 15px; padding-left: 15px;" | <br />
|[[File:Baustelle.png]] This part is '''under construction'''. <br />
|}--><br />
<br />
An EGI Cloud Site based on OpenNebula is an ordinary OpenNebula installation with some EGI-specific integration components. There are no additional requirements placed on the internal site architecture.<br />
<br />
{| style="border:1px solid black; padding:5px; margin: auto; width: 1000px;"<br />
|+ OpenNebula-based site with FedCloud integration components<br />
| [[File:OpenNebulaSite.png]]<br />
|-<br />
|<br />
The following components must be installed alongside OpenNebula:<br />
* '''vmcatcher''', which checks the [https://appdb.egi.eu/browse/cloud EGI App DB] for new or updated images that need to be supported on the site. It downloads images and registers them with OpenNebula, so that they can be used in resource instantiation. Vmcatcher configuration is [[#EGI_Image_Management_2|explained below]].<br />
* '''rOCCI-server''', which provides a standard OCCI interface. It translates between the OpenNebula API and OCCI. It must be configured to use its ''opennebula'' backend, and to use ''voms'' for authentication. Follow the [[rOCCI:ROCCI-server_Admin_Guide|rOCCI-server Admin Guide]] for installation, and check [[#rOCCI-server + VOMS|below]] for FedCloud-specific configuration.<br />
* '''local perun scripts''', which allow Perun to set up, block and remove user accounts from OpenNebula, thus managing the full life cycle of a user account. Local script configuration is [[#Perun integration|explained below]].<br />
* '''oneacct''' scripts, which collect accounting data from OpenNebula and publish them into EGI's APEL instance. Oneacct configuration is explained on the [[Fedcloud-tf:WorkGroups:Scenario4#OpenNebula_Accounting_Scripts|FedCloud Accounting]] page.<br />
* '''BDII''', which registers the site's configuration and description through the EGI Information System to facilitate service discovery. Configuration is [[#EGI_Information_System_2|explained below]].<br />
|}<br />
<br />
Note: '''CDMI''' storage endpoints are currently not supported for OpenNebula-based sites.<br />
<br />
Note 2: OpenNebula ''Sunstone'' is '''not''' required!<br />
<br />
==== Open Ports ====<br />
<br />
The following ports must be open to allow access to an OpenNebula-based FedCloud site:<br />
<br />
{| class="wikitable" style="margin: auto; margin-top: 30px; margin-bottom: 30px;"<br />
|+ Open Ports for OpenNebula and other components in FedCloud<br />
! style="width: 90px;" | Port<br />
! style="width: 110px;" | Application<br />
! style="width: 430px;" | Host<br />
! style="width: 250px;" | Note<br />
|-<br />
|'''22'''/TCP<br />
|'''SSH'''<br />
|OpenNebula '''Server''' Node<br />
|<code>one</code> tools, Perun scripts<br />
|-<br />
|'''2170'''/TCP<br />
|'''BDII'''/LDAP<br />
|BDII Node (typically the OpenNebula '''Server''' Node)<br />
|EGI Service Discovery<br />
|-<br />
|'''11443'''/TCP<br />
|'''OCCI'''/HTTPs<br />
|'''rOCCI-server''' node (typically the OpenNebula Server Node but can be located elsewhere)<br />
|OCCI cloud resource management<br />
|}<br />
<br />
Open ports cannot be specified in advance for '''OpenNebula hosts''', which run the virtual machines; their port requirements depend on the VMs and cannot be known beforehand.<br />
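As a sketch, the table above translates into firewall rules like the following iptables fragment. This is illustrative only; firewall tooling varies per site, and each rule should be applied only on the node that actually runs the corresponding service:<br />

```
# OpenNebula Server node: SSH for the one tools and Perun scripts
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# BDII node: LDAP for EGI Service Discovery
-A INPUT -p tcp -m tcp --dport 2170 -j ACCEPT
# rOCCI-server node: OCCI over HTTPS
-A INPUT -p tcp -m tcp --dport 11443 -j ACCEPT
```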
<br />
==== Service Accounts ====<br />
<br />
This is an overview of the service accounts used in an OpenNebula-based FedCloud site. The names shown are defaults and can be changed if required.<br />
<br />
{| class="wikitable" style="margin: auto; margin-top: 30px; margin-bottom: 30px;"<br />
|+ Service Accounts in OpenNebula sites in FedCloud<br />
! style="width: 90px;" | Type<br />
! style="width: 110px;" | Account name<br />
! style="width: 180px;" | Host<br />
! style="width: 500px;" | Use<br />
|-<br />
|rowspan="4"|System accounts<br />
|<code>oneadmin</code><br />
|OpenNebula Server<br />
|'''Default''' management account in OpenNebula. Also used by the '''Perun''' scripts, which access the account with SSH.<br />
|-<br />
|<code>rocci</code><br />
|rOCCI-server host (typically OpenNebula server)<br />
|Apache application processes for the '''rOCCI-server'''. It is only a service account, no access required.<br />
|-<br />
|<code>apel</code><br />
|OpenNebula server<br />
|Service account used to run '''APEL export''' scripts. Just a service account, no access required.<br />
|-<br />
|<code>openldap</code><br />
|OpenNebula server<br />
|Service account used to run LDAP for '''BDII'''. Just a service account, no access required.<br />
|-<br />
|OpenNebula accounts<br />
|<code>rocci</code><br />
|OpenNebula Server<br />
|Used by the '''rOCCI-server''' to perform tasks through the OpenNebula API.<br />
|}<br />
<br />
=== OpenNebula Installation ===<br />
Follow [http://opennebula.org/documentation/ OpenNebula Documentation] and install OpenNebula with enabled X.509 authentication support.<br />
<br />
The following OpenNebula versions are supported:<br />
* OpenNebula v4.4.x (legacy)<br />
* OpenNebula v4.6.x<br />
* OpenNebula v4.8.x<br />
* OpenNebula v4.10.x<br />
* OpenNebula v4.12.x<br />
<br />
=== OpenNebula Integration ===<br />
<br />
==== Integration Prerequisites ====<br />
* Working OpenNebula installation with X.509 support enabled. Resource Providers are encouraged to follow the [http://docs.opennebula.org/4.12/administration/authentication/x509_auth.html step-by-step configuration guide provided by OpenNebula developers]. There is no need to change the authentication driver for the <code>oneadmin</code> user or to create any user accounts manually at this time. <br />
* Valid IGTF-trusted host certificates for selected hosts.<br />
<br />
==== EGI Virtual Machine Management Interface -- OCCI ====<br />
<br />
See [[rOCCI:ROCCI-server_Admin_Guide|rOCCI-server Installation Guide]].<br />
<br />
==== EGI User Management/AAI ====<br />
<br />
===== rOCCI-server + VOMS =====<br />
<br />
*Configure OpenNebula's X.509 authentication by modifying the <code>/etc/one/auth/x509_auth.conf</code> file:<br />
<br />
# Path to the trusted CA directory. It should contain the trusted CAs for<br />
# the server; each CA certificate should be named CA_hash.0<br />
:ca_dir: "/etc/grid-security/certificates"<br />
<br />
For more information, see the official [http://opennebula.org/documentation OpenNebula documentation].<br />
<br />
*rOCCI-server <br />
Example VHOST configuration file for Apache2 with only VOMS authentication enabled:<br />
<br />
<pre><br />
<VirtualHost *:11443><br />
# if you wish to change the default Ruby used to run this app<br />
PassengerRuby /opt/occi-server/embedded/bin/ruby<br />
<br />
# enable SSL<br />
SSLEngine on<br />
<br />
# for security reasons you may restrict the SSL protocol, but some clients may fail if SSLv2 is not supported<br />
SSLProtocol All -SSLv2 -SSLv3<br />
<br />
# this should point to your server host certificate<br />
SSLCertificateFile /etc/grid-security/hostcert.pem<br />
<br />
# this should point to your server host key<br />
SSLCertificateKeyFile /etc/grid-security/hostkey.pem<br />
<br />
# directory containing the Root CA certificates and their hashes<br />
SSLCACertificatePath /etc/grid-security/certificates<br />
<br />
# directory containing CRLs<br />
SSLCARevocationPath /etc/grid-security/certificates<br />
<br />
# set to optional, this tells Apache to attempt to verify SSL certificates if provided<br />
# for X.509 access with GridSite/VOMS, however, set to 'require'<br />
#SSLVerifyClient optional<br />
SSLVerifyClient require<br />
<br />
# if you have multiple CAs in the file above, you may need to increase the verify depth<br />
SSLVerifyDepth 10<br />
<br />
# enable passing of SSL variables to passenger. For GridSite/VOMS, enable also exporting certificate data<br />
SSLOptions +StdEnvVars +ExportCertData<br />
<br />
# configure OpenSSL inside rOCCI-server to validate peer certificates (for CMFs)<br />
#SetEnv SSL_CERT_FILE /path/to/ca_bundle.crt<br />
SetEnv SSL_CERT_DIR /etc/grid-security/certificates<br />
<br />
# set RackEnv<br />
RackEnv production<br />
LogLevel info<br />
<br />
ServerName occi.host.example.org<br />
# important, this needs to point to the public folder of your rOCCI-server<br />
DocumentRoot /opt/occi-server/embedded/app/rOCCI-server/public<br />
<Directory /opt/occi-server/embedded/app/rOCCI-server/public><br />
## export GridSite/VOMS environment variables (needed for gridsite-admin.cgi to work)<br />
GridSiteEnvs on<br />
## Nice GridSite directory listings (without truncating file names!)<br />
GridSiteIndexes off<br />
## If this is greater than zero, we will accept GSI Proxies for clients<br />
## (full client certificates - eg inside web browsers - are always ok)<br />
GridSiteGSIProxyLimit 4<br />
## This directive allows authorized people to write/delete files<br />
## from non-browser clients - eg with htcp(1)<br />
GridSiteMethods ""<br />
<br />
Allow from all<br />
Options -MultiViews<br />
</Directory><br />
<br />
# configuration for Passenger<br />
PassengerUser rocci<br />
PassengerGroup rocci<br />
PassengerMinInstances 3<br />
PassengerFriendlyErrorPages off<br />
<br />
# configuration for rOCCI-server<br />
## common<br />
SetEnv ROCCI_SERVER_LOG_DIR /var/log/occi-server<br />
SetEnv ROCCI_SERVER_ETC_DIR /etc/occi-server<br />
<br />
SetEnv ROCCI_SERVER_PROTOCOL https<br />
SetEnv ROCCI_SERVER_HOSTNAME occi.host.example.org<br />
SetEnv ROCCI_SERVER_PORT 11443<br />
SetEnv ROCCI_SERVER_AUTHN_STRATEGIES "voms"<br />
SetEnv ROCCI_SERVER_HOOKS oneuser_autocreate<br />
SetEnv ROCCI_SERVER_BACKEND opennebula<br />
SetEnv ROCCI_SERVER_LOG_LEVEL info<br />
SetEnv ROCCI_SERVER_LOG_REQUESTS_IN_DEBUG no<br />
SetEnv ROCCI_SERVER_TMP /tmp/occi_server<br />
SetEnv ROCCI_SERVER_MEMCACHES localhost:11211<br />
<br />
## experimental<br />
SetEnv ROCCI_SERVER_ALLOW_EXPERIMENTAL_MIMES no<br />
<br />
## authN configuration<br />
SetEnv ROCCI_SERVER_AUTHN_VOMS_ROBOT_SUBPROXY_IDENTITY_ENABLE no<br />
<br />
## hooks<br />
#SetEnv ROCCI_SERVER_USER_BLACKLIST_HOOK_USER_BLACKLIST "/path/to/yml/file.yml"<br />
#SetEnv ROCCI_SERVER_USER_BLACKLIST_HOOK_FILTERED_STRATEGIES "voms x509 basic"<br />
SetEnv ROCCI_SERVER_ONEUSER_AUTOCREATE_HOOK_VO_NAMES "dteam ops"<br />
<br />
## ONE backend<br />
SetEnv ROCCI_SERVER_ONE_XMLRPC http://localhost:2633/RPC2<br />
SetEnv ROCCI_SERVER_ONE_USER rocci<br />
SetEnv ROCCI_SERVER_ONE_PASSWD yourincrediblylonganddifficulttoguesspassword<br />
</VirtualHost><br />
</pre><br />
<br />
It is strongly recommended to set '''SSLVerifyClient require''' and '''SetEnv ROCCI_SERVER_AUTHN_STRATEGIES "voms"'''!<br />
<br />
* Support for EGI VOs: [[HOWTO16 | VOMS configuration]]<br />
* Create empty groups ''fedcloud.egi.eu'', ''ops'' and ''dteam'' in OpenNebula.<br />
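The group creation above can be sketched as a dry-run loop. Note that this is only an illustrative sketch: it prints the <code>onegroup</code> commands instead of running them, so they can be reviewed before execution on your OpenNebula server.<br />

```shell
# Hypothetical dry-run helper: print (rather than run) the onegroup
# commands needed to create the empty groups required by FedCloud.
print_group_cmds() {
  for grp in "$@"; do
    echo "onegroup create $grp"
  done
}

# Emit the commands for the three groups mentioned above.
print_group_cmds fedcloud.egi.eu ops dteam
```

Pipe the output to a shell (or run the printed commands by hand) as the <code>oneadmin</code> user once reviewed.<br />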
<br />
===== Perun integration =====<br />
The current rOCCI-server implementation does not handle user management and identity propagation, so integration with a third-party service is necessary. The [https://perun.metacentrum.cz/perun-gui-cert/ Perun] VO management server, developed and maintained by CESNET, provides user management capabilities for OpenNebula Resource Providers. It uses locally installed scripts (fully under the control of the Resource Provider in question) to propagate changes in the user pool to all registered Resource Providers. Resource Providers are required to install and, if need be, configure these scripts, and to report back to the EGI Cloud Federation for registration in Perun. Installation and configuration details are available in the [https://github.com/EGI-FCTF/fctf-perun EGI-FCTF/fctf-perun GitHub repository].<br />
<br />
Remember that Perun requires '''SSH access''' to your machine, so that it can invoke the scripts and push user account changes to your site!<br />
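Since Perun only needs to invoke the propagation scripts, the SSH key it uses can be restricted. The following <code>~oneadmin/.ssh/authorized_keys</code> entry is hypothetical (the command path, key placeholder and comment are illustrative; check the fctf-perun repository for the actual invocation used by Perun):<br />

```
command="/opt/perun-propagation/propagate.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa <perun-public-key> perun@perun.metacentrum.cz
```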
<br />
===== Manual account management =====<br />
<br />
If you want to use X.509/VOMS authentication for your users, you need to create users in OpenNebula with the X.509 driver. For a user named 'johnsmith' from the <code>fedcloud.egi.eu</code> VO, the command may look like this: <br />
<br />
$ oneuser create johnsmith "/DC=es/DC=irisgrid/O=cesga/CN=johnsmith/VO=fedcloud.egi.eu/Role=NULL/Capability=NULL" --driver x509<br />
<br />
*Then set its properties:<br />
<br />
$ oneuser update &lt;id_x509_user&gt;<br />
X509_DN="/DC=es/DC=irisgrid/O=cesga/CN=johnsmith"<br />
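The <code>X509_DN</code> set above is the proxy DN with the VOMS attributes stripped. A small sketch of that transformation (the helper name is ours; the sed patterns assume the usual <code>/VO=</code>, <code>/Role=</code> and <code>/Capability=</code> components):<br />

```shell
# Hypothetical helper: derive the base certificate DN (for X509_DN)
# from a VOMS proxy-style DN by stripping the VOMS attribute components.
strip_voms_attrs() {
  echo "$1" | sed 's|/VO=[^/]*||; s|/Role=[^/]*||; s|/Capability=[^/]*||'
}

# Example with the DN used above.
strip_voms_attrs "/DC=es/DC=irisgrid/O=cesga/CN=johnsmith/VO=fedcloud.egi.eu/Role=NULL/Capability=NULL"
```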
<br />
==== EGI Accounting ====<br />
<br />
See [[Fedcloud-tf:WorkGroups:Scenario4#OpenNebula_Accounting_Scripts|OpenNebula Accounting Scripts]].<br />
<br />
==== EGI Information System ====<br />
<br />
Sites must publish information to the EGI information system, which is based on BDII. There is a common [https://github.com/EGI-FCTF/cloud-bdii-provider bdii provider] for all cloud management frameworks. Information on installation and configuration is available in the cloud-bdii-provider [https://github.com/EGI-FCTF/cloud-bdii-provider/blob/master/README.md README.md] and in the [[Fedclouds BDII instructions]], which include a [[Fedclouds_BDII_instructions#OpenNebula_.2B_rOCCI|section with OpenNebula-specific details]].<br />
<br />
==== EGI Image Management ====<br />
<span style="color:#FFFFFF; background:#FF0000">The current version of this integration component requires manual intervention from the site administrator when a new appliance/image is registered (NOT on subsequent updates). The site administrator must manually create a Virtual Machine Template and, in this template, reference the image in question by IMAGE and IMAGE_UNAME. This is a temporary workaround and will be removed in the next release of the vmcatcher integration component.</span><br />
<br />
Sites in FedCloud offering VM management capability must give access to VO-endorsed VM images. This functionality is provided with vmcatcher (which can subscribe to the image lists available in AppDB) and a set of tools that push the subscribed images into the site's image catalog. In order to subscribe to VO-wide image lists, you need a valid access token for the AppDB. Check the [https://wiki.appdb.egi.eu/main:faq:how_to_get_access_to_vo-wide_image_lists how to get access to VO-wide image lists] and [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher how to subscribe to a private image list] documentation for more information.<br />
<br />
Please refer to [https://github.com/hepix-virtualisation/vmcatcher vmcatcher documentation] for installation. <br />
<br />
[https://github.com/grid-admin/vmcatcher_eventHndlExpl_ON vmcatcher_eventHndlExpl_ON] is a vmcatcher event handler for OpenNebula that stores or disables images based on vmcatcher events. The following guide shows how to install and configure the vmcatcher handler as the oneadmin user, directly from GitHub. The configuration will automatically synchronize the OpenNebula Image datastore with the registered vmcatcher images. <br />
<br />
*Install pre-requisites for VMCatcher handler<br />
<br />
[oneadmin@one-sandbox]$ sudo yum install -y qemu-img<br />
<br />
*Install VMcatcher handler from github<br />
<br />
[oneadmin@one-sandbox]$ mkdir $HOME/vmcatcher_eventHndlExpl_ON<br />
[oneadmin@one-sandbox]$ cd $HOME/vmcatcher_eventHndlExpl_ON<br />
[oneadmin@one-sandbox]$ wget http://github.com/grid-admin/vmcatcher_eventHndlExpl_ON/archive/v0.0.8.zip -O vmcatcher_eventHndlExpl_ON.zip<br />
[oneadmin@one-sandbox]$ unzip vmcatcher_eventHndlExpl_ON.zip<br />
[oneadmin@one-sandbox]$ mv vmcatcher_eventHndlExpl_ON*/* ./<br />
[oneadmin@one-sandbox]$ rmdir vmcatcher_eventHndlExpl_ON-*<br />
<br />
*Create the vmcatcher folders for ON (do not use /var/lib/one/ or other OpenNebula default directories for the vmcatcher cache, since you cannot import images into OpenNebula from these directories; also, since this directory will host a copy of all the images downloaded via vmcatcher, it is advisable to place it on a separate disk)<br />
<br />
[oneadmin@one-sandbox]$ sudo mkdir -p /opt/vmcatcher-ON/cache /opt/vmcatcher-ON/cache/partial /opt/vmcatcher-ON/cache/expired /opt/vmcatcher-ON/cache/templates<br />
[oneadmin@one-sandbox]$ sudo chown oneadmin:oneadmin -R /opt/vmcatcher-ON<br />
<br />
*Check that vmcatcher is running properly by listing and subscribing to an image list<br />
<br />
[oneadmin@one-sandbox]$ export VMCATCHER_RDBMS="sqlite:////opt/vmcatcher-ON/vmcatcher.db"<br />
[oneadmin@one-sandbox]$ vmcatcher_subscribe -l<br />
[oneadmin@one-sandbox]$ vmcatcher_subscribe -e -s https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list<br />
[oneadmin@one-sandbox]$ vmcatcher_subscribe -l<br />
8ddbd4f6-fb95-4917-b105-c89b5df99dda True None https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list<br />
<br />
*Create a CRON wrapper for vmcatcher, named <code>/var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh</code>, with the following content:<br />
<br />
#!/bin/bash<br />
#Cron handler for the vmcatcher image synchronization script for OpenNebula<br />
<br />
#Vmcatcher configuration variables<br />
export VMCATCHER_RDBMS="sqlite:////opt/vmcatcher-ON/vmcatcher.db"<br />
export VMCATCHER_CACHE_DIR_CACHE="/opt/vmcatcher-ON/cache"<br />
export VMCATCHER_CACHE_DIR_DOWNLOAD="/opt/vmcatcher-ON/cache/partial"<br />
export VMCATCHER_CACHE_DIR_EXPIRE="/opt/vmcatcher-ON/cache/expired"<br />
export VMCATCHER_CACHE_EVENT="python $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON"<br />
<br />
#Update vmcatcher image lists<br />
vmcatcher_subscribe -U<br />
<br />
#Add all the new images to the cache<br />
for a in `vmcatcher_image -l | awk '{if ($2==2) print $1}'`; do<br />
vmcatcher_image -a -u $a<br />
done<br />
<br />
#Update the cache<br />
vmcatcher_cache -v -v<br />
<br />
*Test that the vmcatcher handler is working correctly by running<br />
<br />
[oneadmin@one-sandbox]$ chmod +x $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh<br />
[oneadmin@one-sandbox]$ $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh<br />
INFO:main:Defaulting actions as 'expire', and 'download'.<br />
DEBUG:Events:event 'ProcessPrefix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=2014-07-16 12:25:49,586; DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'ProcessPrefix'<br />
2014-07-16 12:25:49,586; WARNING; vmcatcher_eventHndl_ON; main -- Ignoring event 'ProcessPrefix'<br />
<br />
INFO:DownloadDir:Downloading '541b01a8-94bd-4545-83a8-6ea07209b440'.<br />
DEBUG:Events:event 'AvailablePrefix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=2014-07-16 12:26:00,522; DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'AvailablePrefix'<br />
2014-07-16 12:26:00,522; WARNING; vmcatcher_eventHndl_ON; main -- Ignoring event 'AvailablePrefix'<br />
<br />
INFO:CacheMan:moved file 541b01a8-94bd-4545-83a8-6ea07209b440<br />
DEBUG:Events:event 'AvailablePostfix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=2014-07-16 12:26:00,567; DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'AvailablePostfix'<br />
2014-07-16 12:26:00,567; DEBUG; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- Starting HandleAvailablePostfix for '541b01a8-94bd-4545-83a8-6ea07209b440'<br />
2014-07-16 12:26:00,571; INFO; vmcatcher_eventHndl_ON; UntarFile -- /opt/vmcatcher-ON/cache/541b01a8-94bd-4545-83a8-6ea07209b440 is an OVA file. Extracting files...<br />
2014-07-16 12:26:00,599; INFO; vmcatcher_eventHndl_ON; UntarFile -- Converting /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440/CoreLinux-disk1.vmdk to raw format.<br />
2014-07-16 12:26:00,641; INFO; vmcatcher_eventHndl_ON; UntarFile -- New RAW image created: /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440/CoreLinux-disk1.vmdk.raw<br />
2014-07-16 12:26:00,642; INFO; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- Creating template file /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440.one<br />
2014-07-16 12:26:00,780; INFO; vmcatcher_eventHndl_ON; getImageListXML -- Getting image list: oneimage list --xml<br />
2014-07-16 12:26:00,784; INFO; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- There is not a previous image with the same UUID in the OpenNebula infrastructure<br />
2014-07-16 12:26:00,785; INFO; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- Instantiating template: oneimage create -d default /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440.one | cut -d ':' -f 2<br />
<br />
DEBUG:Events:event 'ProcessPostfix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'<br />
DEBUG:Events:stdout=<br />
DEBUG:Events:stderr=2014-07-16 12:26:01,077; DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'ProcessPostfix'<br />
2014-07-16 12:26:01,077; WARNING; vmcatcher_eventHndl_ON; main -- Ignoring event 'ProcessPostfix'<br />
<br />
*Add the following line to the oneadmin user's crontab:<br />
<br />
50 */6 * * * $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh &gt;&gt; /var/log/vmcatcher.log 2&gt;&amp;1<br />
<br />
''NOTES:'' <br />
<br />
*vmcatcher_cache must be executed as oneadmin user. <br />
*Environment variables can be used to set default values but the command line options will override any set environment options. Set these env variables for oneadmin user: VMCATCHER_RDBMS, VMCATCHER_CACHE_DIR_CACHE, VMCATCHER_CACHE_DIR_DOWNLOAD, VMCATCHER_CACHE_DIR_EXPIRE and VMCATCHER_CACHE_EVENT. <br />
*vmcatcher_eventHndlExpl_ON generates ON image templates. These templates are available in $VMCATCHER_CACHE_DIR_CACHE/templates (named $VMCATCHER_EVENT_DC_IDENTIFIER.one) <br />
*The new ON images include the ''VMCATCHER_EVENT_DC_IDENTIFIER = &lt;VMCATCHER_UUID&gt;'' tag. This tag is used to identify FedCloud VM images. <br />
*Images expired by vmcatcher are disabled in ON. It is up to the RP to remove disabled images or to assign the new ones to a specific ON group or user.<br />
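Since FedCloud images are identified by the ''VMCATCHER_EVENT_DC_IDENTIFIER'' tag, a site admin can pick them out of the <code>oneimage list --xml</code> output. A minimal sketch (the helper name is ours, and the sample XML below is made up for illustration):<br />

```shell
# Illustrative helper: read OpenNebula image-pool XML on stdin and print
# the vmcatcher identifiers carried by the images. In production, feed it
# the output of `oneimage list --xml`.
extract_vmcatcher_ids() {
  grep -o '<VMCATCHER_EVENT_DC_IDENTIFIER>[^<]*' | cut -d'>' -f2
}

# Hypothetical sample document standing in for `oneimage list --xml`.
sample_xml='<IMAGE_POOL><IMAGE><NAME>tinycorelinux</NAME><TEMPLATE><VMCATCHER_EVENT_DC_IDENTIFIER>541b01a8-94bd-4545-83a8-6ea07209b440</VMCATCHER_EVENT_DC_IDENTIFIER></TEMPLATE></IMAGE></IMAGE_POOL>'
printf '%s\n' "$sample_xml" | extract_vmcatcher_ids
```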
<br />
==== Registration of services in GOCDB ====<br />
<br />
Site cloud services must be registered in [https://goc.egi.eu EGI Configuration Management Database (GOCDB)]. If you are creating a new site for your cloud services, check the [[PROC09|PROC09 Resource Centre Registration and Certification]] procedure. Services can also coexist within an existing (grid) site.<br />
<br />
If offering an OCCI interface, sites should register the following services:<br />
* eu.egi.cloud.vm-management.occi for the OCCI endpoint offered by the site. Please note the special endpoint URL syntax described at [[Federated_Cloud_Architecture#Central_service_registry:_GOCDB|GOCDB usage in FedCloud]]<br />
* eu.egi.cloud.accounting (the host should be your OCCI machine)<br />
* eu.egi.cloud.vm-metadata.vmcatcher (the host should also be your OCCI machine)<br />
* Site should also declare the following properties using the ''Site Extension Properties'' feature:<br />
*# Max number of virtual cores for VM with parameter name: <code>cloud_max_cores4VM</code> <br />
*# Max amount of RAM for VM with parameter name: <code>cloud_max_RAM4VM</code> using the format: value+unit, e.g. "16GB".<br />
*# Max amount of storage that could be mounted in a VM with parameter name: <code>cloud_max_storage4VM</code> using the format: value+unit, e.g. "16GB".<br />
<br />
Once the site services are registered in GOCDB and set as monitored they will be checked by the [https://cloudmon.egi.eu/nagios Cloud SAM instance].<br />
<br />
=== Installation Validation ===<br />
<br />
You can check your installation following these steps: <br />
<br />
*Check in [https://cloudmon.egi.eu/nagios Cloudmon] that your services are listed and are passing the tests. If all the tests are OK, your installation is already in good shape. <br />
*Check that you are publishing cloud information in your site BDII:<br><code>ldapsearch -x -h &lt;site bdii host&gt; -p 2170 -b Glue2GroupID=cloud,Glue2DomainID=&lt;your site name&gt;,o=glue</code><br />
*Check that all the images listed on the [https://appdb.egi.eu/store/vo/fedcloud.egi.eu AppDB page for the fedcloud.egi.eu VO] are listed in your BDII. This sample query will return all the template IDs registered in your BDII:<br><code>ldapsearch -x -h &lt;site bdii host&gt; -p 2170 -b Glue2GroupID=cloud,Glue2DomainID=&lt;your site name&gt;,o=glue objectClass=GLUE2ApplicationEnvironment GLUE2ApplicationEnvironmentRepository</code><br />
*Try to start one of those images in your cloud. You can do it with <code>onetemplate instantiate</code> or OCCI commands; the result should be the same.<br />
*Execute the [[HOWTO04_Site_Certification_Manual_tests#Check_the_functionality_of_the_cloud_elements|site certification manual tests]] against your endpoints.<br />
*Check in the [http://accounting-devel.egi.eu/cloud.php accounting portal] that your site is listed and the values reported look consistent with the usage of your site.<br />
<br />
= Revision History =<br />
<br />
{| border="3"<br />
|-<br />
! Version <br />
! Authors <br />
! Date <br />
! Comments<br />
|-<br />
| <br />
| <br />
| <br />
| <br />
|}<br />
<br />
[[Category:Operations_Manuals]]</div>
Patrykl
https://wiki.egi.eu/w/index.php?title=Fedcloud-tf:ResourceProviders&diff=68204
Fedcloud-tf:ResourceProviders
2014-06-18T08:54:58Z
<p>Patrykl: /* Fully integrated Resource Providers */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{TOC_right}} <br />
<br />
== EGI Federated Cloud Resource Providers ==<br />
<br />
EGI Federated Cloud resource providers are institutions and companies that contribute to the FedCloud by providing access to their cloud infrastructure via the Federation. Resource providers are free to use any Cloud Management Framework (OpenNebula, OpenStack, etc.); the only requirement is that the CMF exposes interfaces compliant with the [[Fedcloud-tf:Technology:Architecture|FedCloud standards]]. <br />
<br />
== Join as a Resource Provider ==<br />
<br />
Every institution and company is invited to join the EGI Federated Cloud. Members of the EGI Federated Cloud also have the opportunity to join the [[Fedcloud-tf:FederatedCloudsTaskForce|EGI Federated Cloud Task Force]], contributing directly to the creation and implementation of the cloud federation. <br />
<br />
If you want to join the EGI Federated Cloud, you can send an email to the [mailto:fedcloud-tf@mailman.egi.eu EGI Federated Cloud Task Force], specifying the following information (fields marked with * are mandatory): <br />
<br />
*Name* <br />
*Institute* <br />
*Email address* <br />
*One paragraph long description of your organization <br />
*Envisaged timeline (is there a deadline to finish the setup? for how long do you wish to contribute to the EGI Federated Cloud project?) <br />
*Estimated number and size of machines that you may provide to EGI <br />
*Type of Cloud Management Framework you are using <br />
*Link to webpage, document or other online resource for further information<br />
<br />
== Cloud Management Frameworks ==<br />
<br />
The federation of IaaS Cloud resources in EGI is built upon the extensive autonomy of Resource Providers in terms of ownership of exposed resources. The different cloud providers may use different Cloud Management Frameworks (e.g. OpenNebula, OpenStack); the federation requires only that the CMF exposes a set of [[Fedcloud-tf:Technology:Architecture|common interfaces]] (e.g. [[Fedcloud-tf:Technology:Architecture#VM_management_interface:_OCCI|OCCI for VM management]], [[Fedcloud-tf:Technology:Architecture#Data_management_interface:_CDMI|CDMI for Data management]]) <br />
<br />
More in-depth information about the EGI Federated Cloud architecture and technology is provided [[Fedcloud-tf:Technology|here]]. <br />
<br />
=== Compatible CMFs ===<br />
<br />
The table below lists the most widely used CMFs and the status of their support for the EGI Federated Cloud interfaces. <br />
<br />
{| class="wikitable"<br />
|-<br />
! Cloud Mgmt. Fram. <br />
! Fed. AAI <br />
! Monitoring<ref>Monitoring is passive, i.e. no active integration on the side of the Cloud Management Framework is necessary.</ref> <br />
! Accounting <br />
! Img. Mgmt. <br />
! OCCI <br />
! CDMI<br />
|-<br />
! OpenStack <br />
| Yes <br />
| Yes <br />
| Yes <br />
| Yes <br />
| Yes <br />
| Yes<br />
|-<br />
! OpenNebula <br />
| Yes <br />
| Yes <br />
| Yes <br />
| Yes <br />
| Yes <br />
| No<br />
|-<br />
! StratusLab <br />
| Yes <br />
| Yes <br />
| Yes <br />
| - <br />
| Yes<ref>StratusLab’s OCCI support is based on OpenNebula. Since StratusLab will discontinue its integration with OpenNebula (see below), the future of StratusLab OCCI interface is unknown at this point in time.</ref> <br />
| -<br />
|-<br />
! WNoDeS <br />
| Yes <br />
| Yes <br />
| Yes <br />
| - <br />
| - <br />
| -<br />
|-<br />
! Synnefo <br />
| Yes <br />
| Yes <br />
| Yes <br />
| Yes <br />
| Yes <br />
| Yes<br />
|}<br />
<br />
== Deployment ==<br />
<br />
Guides for deployment of the FedCloud interfaces for the most used Cloud Management Frameworks are reported below: <br />
<br />
*[[Fedcloud-tf:ResourceProviders:OpenNebula|OpenNebula]] <br />
*[[Fedcloud-tf:ResourceProviders:OpenStack|OpenStack]] <br />
*[[Fedcloud-tf:ResourceProviders:StratusLab|StratusLab]] <br />
*[[Fedcloud-tf:ResourceProviders:WNoDeS|WNoDeS]] <br />
*[[Fedcloud-tf:ResourceProviders:Synnefo|Synnefo]]<br />
<br />
== Current Resource Providers ==<br />
<br />
=== Fully integrated Resource Providers ===<br />
<br />
Sites that have a valid GOCDB entry with all mandatory service types configured. The Cloud compute service, the Cloud storage service, or both must be listed properly. Monitoring via cloudmon.egi.eu must be working, with probes for all mandatory and minimum Cloud services operational. <br />
<br />
To get an impression of each site's availability and reliability, please have a look at [http://mon.egi.eu/myegi/sa/?view=2&graph=1&vo=37&profile=101&filters-value-Regions_or_Tiers=&filters-value-Sites=&production=1&preproduction=1&dateorperiod=pd&period=pM&startdate=17-03-2014&enddate=16-04-2014 MyEGI] <br />
<br />
'''Certification:''' Certification of Resource Providers is conducted by invoking [[PROC18|PROC18]]. PROC18 is in effect until the original [[PROC09|PROC09]] has been reconciled and updated to fully support Cloud Resources as a first class citizen. <br />
<br />
{| cellspacing="0" cellpadding="5" class="wikitable" style="border:1px solid black; text-align:left;"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Certification <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100 Percent IT Ltd <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | certified <br />
| style="border-bottom:1px dotted silver;" | OCCI - occi-api.100percentit.com <br />
| style="border-bottom:1px dotted silver;" | 120 cores with 128GB RAM and 16TB shared storage <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | UNIZAR / BIFI <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | Jaime Ibar Yubero <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Grizzly &amp; Havana) <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/ws/ticket_info.php?ticket=100239 certified] <br />
| style="border-bottom:1px dotted silver;" | <br />
OCCI - server4-epsh.unizar.es (Grizzly) <br />
<br />
OCCI - server4-eupt.unizar.es (Havana) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
720 cores with 740GB RAM&nbsp;: 2 testbeds x 360 cores with 370GB RAM ( in Xeon servers 2xhexacore 24GB RAM local HD 500GB) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name m1.xlarge, <br />
<br />
VCPUs 8, <br />
<br />
Root Disk 160 GB, <br />
<br />
Ephemeral Disk 0 GB, <br />
<br />
Total Disk 160 GB, <br />
<br />
RAM 16,384 MB <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BSC <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Daniele Lezzi <br />
| style="border-bottom:1px dotted silver;" | Roger Rafanell <br />
| style="border-bottom:1px dotted silver;" | BSC-Cloud <br />
| style="border-bottom:1px dotted silver;" | Emotive <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | CDMI - bscgrid05.bsc.es <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Marcin Radecki, Jan Meizner <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/?mode=ticket_info&ticket_id=105027 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - head.cloud.cyfronet.pl <br />
| style="border-bottom:1px dotted silver;" | 200 Cores, 400 GB RAM, 20 TB storage (images/VMs) at launch <br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 120 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Esteban Freire <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/ws/ticket_info.php?ticket=100948 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - cloud.cesga.es <br />
| style="border-bottom:1px dotted silver;" | 296 Cores, 592GB RAM (37 Xeon servers with 8 cores, 16GB RAM and local HD 500GB). This infrastructure also runs several core services for EGI.eu, so the available capacity is reduced accordingly. <br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/ws/ticket_info.php?ticket=101197 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - carach5.ics.muni.cz <br />
| style="border-bottom:1px dotted silver;" | 240 cores with 960 GB of RAM and 44 TB <br />
| style="border-bottom:1px dotted silver;" | 22 cores, 90 GB of RAM, approx. 800 GB of attached storage<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| bgcolor="yellow" style="border-bottom:1px dotted silver;" | [https://ggus.eu/index.php?mode=ticket_info&ticket_id=104882 In progress] <br />
| style="border-bottom:1px dotted silver;" | <br />
OCCI - egi-cloud.zam.kfa-juelich.de <br />
<br />
CDMI - swift.zam.kfa-juelich.de <br />
<br />
| style="border-bottom:1px dotted silver;" | 144 Cores <br />
294 GB RAM <br />
<br />
~18TB Object Storage <br />
<br />
~15TB block storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/index.php?mode=ticket_info&ticket_id=101437 certified] <br />
| style="border-bottom:1px dotted silver;" | <br />
OCCI - occi.cloud.gwdg.de <br />
<br />
CDMI - cdmi.cloud.gwdg.de <br />
<br />
| style="border-bottom:1px dotted silver;" | 192 Core, 768 GB RAM, 40 TB storage<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Enol Fernandez, Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/index.php?mode=ticket_info&ticket_id=100990 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - cloud.ifca.es <br />
| style="border-bottom:1px dotted silver;" | <br />
32 nodes x 8 vcpus x 16GB RAM <br />
<br />
36 nodes x 24 vcpus x 48GB RAM <br />
<br />
34 nodes x 32 vcpus x 128GB RAM <br />
<br />
1 node x 80 vcpus x 1TB RAM <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | GRNET <br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | Athanasia Assiki <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/index.php?mode=ticket_info&ticket_id=105149 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - okeanos-occi2.hellasgrid.gr <br />
| style="border-bottom:1px dotted silver;" | 20 CPUS, 40 GB RAM and 400GB Storage <br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp;<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Martin Bobak <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/index.php?mode=ticket_info&ticket_id=102848 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - nova2.ui.savba.sk <br />
| style="border-bottom:1px dotted silver;" | 96 cores, 3GB RAM per core, 16TB storage <br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD<br />
|-<br />
| rowspan="2" style="border-bottom:1px dotted silver;" | INFN - Catania <br />
| rowspan="2" style="border-bottom:1px dotted silver;" | IT <br />
| rowspan="2" style="border-bottom:1px dotted silver;" | Roberto Barbera <br />
| rowspan="2" style="border-bottom:1px dotted silver;" | Giuseppe La Rocca, Diego Scardaci <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ticketing.cnaf.infn.it/checklist-new/modules/xhelp/ticket.php?id=16572 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - nebula-server-01.ct.infn.it <br />
| rowspan="2" style="border-bottom:1px dotted silver;" | 120 cores and about 6 TB of storage <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ticketing.cnaf.infn.it/checklist-new/modules/xhelp/ticket.php?id=16524 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - stack-server-01.ct.infn.it <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | KTH <br />
| style="border-bottom:1px dotted silver;" | SE <br />
| style="border-bottom:1px dotted silver;" | Zeeshan Ali Shah <br />
| style="border-bottom:1px dotted silver;" | Ake Edlund <br />
| style="border-bottom:1px dotted silver;" | KTH-CLOUD <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/ws/ticket_info.php?ticket=101028 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - egi.cloud.pdc.kth.se <br />
| style="border-bottom:1px dotted silver;" | 192 Cores, 384 GB RAM with 6 TB of storage <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | PRISMA-INFN-BARI <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ticketing.cnaf.infn.it/checklist-new/modules/xhelp/ticket.php?id=16332%E2%80%9D certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - prisma-cloud.ba.infn.it <br />
| style="border-bottom:1px dotted silver;" | 300 Virtual CPU, 600GB of RAM and about 50TB of Storage. <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Eric Frizziero, Cristina Aiftimiei <br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | OpenStack Havana <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://dl.dropboxusercontent.com/u/527460/Clouds/EGI-FedCloud/Certification-ticket.pdf certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - egi-cloud.pd.infn.it <br />
| style="border-bottom:1px dotted silver;" | 48 cores with 96 GB RAM and ~5TB of Storage, up to 24 public IPs <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | MTA SZTAKI <br />
| style="border-bottom:1px dotted silver;" | HU <br />
| style="border-bottom:1px dotted silver;" | Sandor Acs <br />
| style="border-bottom:1px dotted silver;" | Peter Kotcauer, Gabor Kecskemeti <br />
| style="border-bottom:1px dotted silver;" | SZTAKI <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | certified <br />
| style="border-bottom:1px dotted silver;" | OCCI - occi.hpcc.sztaki.hu <br />
| style="border-bottom:1px dotted silver;" | 128 core, 128 GB RAM and 6TB shared storage <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/?mode=ticket_info&ticket_id=104976 certified] <br />
| style="border-bottom:1px dotted silver;" | OCCI - fcctrl.ulakbim.gov.tr <br />
| style="border-bottom:1px dotted silver;" | 224 cores with 672 GB memory and 10 TB shared storage (to be expanded in Jun '14 to 336 cores with 1008 GB memory and &gt;10 TB shared storage) <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| bgcolor="lime" style="border-bottom:1px dotted silver;" | [https://ggus.eu/?mode=ticket_info&ticket_id=105187 certified] <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | 18 nodes x 12 CPU cores with HT (18 x 24 logical cores), each node with 24GB RAM. Storage is 17TB. <br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
=== Integrating Resource Providers ===<br />
<br />
Reporting period: 1 Feb 2014 - 28 Feb 2014 <br />
<br />
Sites with a valid GOCDB entry and at least one service type listed and monitored via cloudmon.egi.eu. <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main Contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Certification <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IN2P3-CC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Gilles MATHIEU <br />
| style="border-bottom:1px dotted silver;" | Mattieu Puel <br />
| style="border-bottom:1px dotted silver;" | IN2P3-CC <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OCCI - ccocci.in2p3.fr <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | KISTI <br />
| style="border-bottom:1px dotted silver;" | KR <br />
| style="border-bottom:1px dotted silver;" | Soonwook Hwang <br />
| style="border-bottom:1px dotted silver;" | Sangwan Kim, Taesang Huh, Jae-Hyuck Kwak <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | 64 cores with 256 GB RAM and 6TB HDD<br />
|}<br />
<br />
=== Interested Resource Providers ===<br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Representative <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Integration plans <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IICT-BAS <br />
| style="border-bottom:1px dotted silver;" | BG <br />
| style="border-bottom:1px dotted silver;" | Emanouil Atanassov <br />
| style="border-bottom:1px dotted silver;" | Todor Gurov <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Napoli <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Silvio Pardi <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - CNAF <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Elisabetta Ronchieri <br />
| style="border-bottom:1px dotted silver;" | Davide Salomoni, Andrea Cristofori <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Torino <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Andrea Guarise <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | Miguel Ángel Díaz, Abel Paz <br />
| style="border-bottom:1px dotted silver;" | OpenStack Havana <br />
| style="border-bottom:1px dotted silver;" | Scheduled - Apr '14 <br />
| style="border-bottom:1px dotted silver;" | 112 cores, 224 GB RAM, 2.5 TB of disk<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubio Montero <br />
| style="border-bottom:1px dotted silver;" | Rafael Mayo García, Manuel Aurelio Rodríguez Pascual <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SURFsara <br />
| style="border-bottom:1px dotted silver;" | NL <br />
| style="border-bottom:1px dotted silver;" | Jhon Masschelein <br />
| style="border-bottom:1px dotted silver;" | Maurice Bouwhuis, Machiel Jansen <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CNRS/IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jérôme Pansanel <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | ISRGrid/IUCC <br />
| style="border-bottom:1px dotted silver;" | IL <br />
| style="border-bottom:1px dotted silver;" | Yossi Baruch <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | DESY <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Patrick Fuhrmann <br />
| style="border-bottom:1px dotted silver;" | Paul Millar <br />
| style="border-bottom:1px dotted silver;" | dCache <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Ian Collier <br />
| style="border-bottom:1px dotted silver;" | Frazer Barnsley, Alan Kyffin <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL Harwell Science <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Jens Jensen <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Castor <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Cloud storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFAE / PIC <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Victor Mendez <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SAGrid <br />
| style="border-bottom:1px dotted silver;" | ZA <br />
| style="border-bottom:1px dotted silver;" | Bruce Becker <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSC <br />
| style="border-bottom:1px dotted silver;" | FI <br />
| style="border-bottom:1px dotted silver;" | Jura Tarus <br />
| style="border-bottom:1px dotted silver;" | Luís Alves, Ulf Tigerstedt, Kalle Happonen <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | Planned - H1 '14 <br />
| style="border-bottom:1px dotted silver;" | Status: Testing resource integration<br />
|-<br />
| style="border-bottom:1px dotted silver;" | SRCE <br />
| style="border-bottom:1px dotted silver;" | HR <br />
| style="border-bottom:1px dotted silver;" | Emir Imamagic <br />
| style="border-bottom:1px dotted silver;" | Luko Gjenero <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | Planned - April 2014 <br />
| style="border-bottom:1px dotted silver;" | Status: Deploying OpenStack cluster, investigating storage options<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GridPP <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Adam Huffman <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Hosted at Imperial College<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polytechnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenNebula &amp; OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
In ONE: 60 cores, 192 GB of RAM.<br />
In OST: 40 cores, 36 GB of RAM.<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CNRS/IN2P3-LAL <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Michel Jouvin <br />
| style="border-bottom:1px dotted silver;" | Mohammed Araj <br />
| style="border-bottom:1px dotted silver;" | StratusLab <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
=== Testbed ===<br />
<br />
More information about the Resource Providers belonging to the FedCloud Testbed is available [[Fedcloud-tf:Testbed|here]]. <br />
<br />
== References ==<br />
<br />
<references /></div>
Patrykl
https://wiki.egi.eu/w/index.php?title=Fedcloud-tf:WorkGroups:Scenario3&diff=67037
Fedcloud-tf:WorkGroups:Scenario3
2014-05-01T08:25:21Z
<p>Patrykl: /* Resource providers LDAP servers */</p>
<hr />
<div>{{Fedcloud-tf:Menu}} {{Fedcloud-tf:WorkGroups:Menu}} {{TOC_right}} <br />
<br />
== Scenario 3: Integrating information from multiple resource providers ==<br />
<br />
<font color="red">Leader: David Wallom, OeRC</font> <br />
<br />
== Scenario collaborators ==<br />
<br />
{| border="1"<br />
|-<br />
! Role <br />
! Institution <br />
! Name<br />
|-<br />
| Scenario leader <br />
| OeRC <br />
| David Wallom<br />
|-<br />
| Collaborator <br />
| OeRC <br />
| Matteo Turilli<br />
|-<br />
| Collaborator <br />
| EGI.eu <br />
| Peter Solagna<br />
|-<br />
| Collaborator <br />
| INFN <br />
| Elisabetta Ronchieri<br />
|}<br />
<br />
== Information that should be published by a cloud service ==<br />
<br />
The following is the information identified during the TF F2F meeting: <br />
<br />
'''Please add more points and edit/comment on the list''' <br />
<br />
#What is the name of the resource and what type of interface can I use to manage instances on the resource? <br />
##What is the endpoint I should contact to interact with the cloud management interface? (E.g. the url of the web-service/portal) <br />
#What are the AuthN and AuthZ rules that operate on your cloud? <br />
#What instances are already installed on the resource and am I allowed to upload my own instances? <br />
#If I am able to upload instances what format of instances does the resource accept? <br />
#Is there a data interface available and if so what is it? <br />
#What is the overall size of the resource? <br />
#Are instance templates defined that limit the choice of instance scales I am able to run? <br />
#What type of virtual network can I establish on the resource? <br />
#Does the resource support cloud scalability through managed bursting to another external provider?<br />
<br />
The following are questions on the dynamic information: <br />
<br />
#I have a virtual instance that requires X, Y, Z resources; does your cloud have A&gt;X, B&gt;Y, C&gt;Z resources available? <br />
#My instance is short-lived: is its utilisation of resources going to be captured in the information system so that overprovisioning will/will not occur? <br />
#What is the charging scheme and how much will using your cloud cost?<br />
<br />
== More information to publish==<br />
=== Storage capabilities===<br />
This section analyses the storage capability information to be published, without (necessarily) being limited to what the currently available GLUE2 schema offers.<br />
<br />
==== Relevant information ====<br />
The following table contains what it is possible to inspect through the OCCI 1.2 spec:<br />
<br />
{| border="1" <br />
! Attribute<br />
!Comment<br />
|-<br />
|occi.storage.size<br />
|Size of the storage resource instance<br />
|-<br />
|occi.storage.status<br />
|Status of the storage instance (online,offline,backup,snapshot..)<br />
|}<br />
<br />
These attributes describe an actual instance, which is not going to be published in the information system. What we want to advertise are the capabilities that can be requested from the cloud service.<br />
<br />
{| border="1" <br />
! Attribute<br />
!Comment<br />
|-<br />
|Max Storage installed in the site<br />
|This is the total amount of disk space that the cloud site provides as virtualized storage resource.<br />
|-<br />
|Max size of a single virtual storage resource<br />
|This is the maximum size that a single virtual storage resource can have <br />
|-<br />
|Interfaces<br />
|How users and VMs can interact with the storage resources. E.g. CDMI.<br />
|-<br />
|Storage throughput<br />
|Max I/O speed allowed to VMs writing/reading to the storage area.<br />
|-<br />
|Capabilities<br />
|These are additional capabilities of a storage service, on top of create/delete/link. Examples could be backup or snapshot.<br />
|}<br />
<br />
''Please add more options in the table'', or participate in the discussion on the FedCloud task force mailing list.<br />
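As a rough illustration, a site could serialise the advertised values from the table above as key=value strings (for instance, as GLUE2 ''OtherInfo'' entries). This is only a sketch: the key names and sample values below are hypothetical and not part of any approved schema. <br />

```python
# Hypothetical sketch: serialise the storage capability attributes from the
# table above into GLUE2-style "OtherInfo" key=value strings for publication.
# All key names here are illustrative, not part of any approved schema.

def storage_capabilities_to_otherinfo(caps):
    """Render a dict of storage capability values as sorted key=value strings."""
    return ["%s=%s" % (key, value) for key, value in sorted(caps.items())]

# Example values mirroring the table rows (assumed figures, for illustration).
site_storage = {
    "MaxStorageInstalled": "50TB",      # total virtualised disk space at the site
    "MaxSingleStorageResource": "2TB",  # largest single virtual storage resource
    "Interfaces": "CDMI",               # how users/VMs interact with storage
    "StorageThroughput": "100MB/s",     # max I/O speed allowed to VMs
    "Capabilities": "backup,snapshot",  # extras on top of create/delete/link
}

for line in storage_capabilities_to_otherinfo(site_storage):
    print(line)
```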
<br />
=== Network capabilities ===<br />
<br />
The following table contains some of the Network capabilities that could be advertised through the information system:<br />
<br />
{| border="1" <br />
! Attribute<br />
!Comment<br />
|-<br />
|Internal bandwidth<br />
|The maximum bandwidth available between the virtual machines in the cloud<br />
|-<br />
|Outbound bandwidth<br />
|Bandwidth that can be allocated to each virtual machine towards the outside of the cloud<br />
|-<br />
|Average latency<br />
|If the VMs are deployed on different physical sites, the latency between the instances can be higher and affect network performance. Low network latency values indicate that the virtual machines are physically instantiated in the same network.<br />
|-<br />
|IPv6 enabled<br />
|Can the virtual network be configured for IPv6?<br />
|-<br />
|Virtual private network enabled<br />
|Is it possible to set up a virtual private network, in order to increase the security and the isolation of the instantiated machines?<br />
|}<br />
<br />
== How to render those information in GLUE2 ==<br />
<br />
'''Note''': the BDII service speaks only GLUE2. The cloud information needs to be squeezed into the current set of GLUE2 entities. If the schema is extended to include cloud-specific entities, the extension needs to be officially approved by OGF and implemented in the various ''glue-schema'' and ''glue-validator'' components deployed with the BDII. <br />
<br />
=== Use the currently available GLUE2.0 entities ===<br />
<br />
Currently GLUE2 includes two main conceptual models, for Computing Elements and Storage Elements. These elements should be used to model the cloud capabilities while remaining compliant with the current GLUE2.0 schema. <br />
<br />
==== Capabilities for cloud services ====<br />
<br />
''Note: '''bold''' capabilities are new, not already in the GLUE2 specification. Adding new capabilities does not require an extension of the GLUE2 schema.<br>'' ''Please:'' add new ''high-level'' capabilities if you feel that something is missing. These capabilities are used in the following entities. <br />
<br />
{| border="1"<br />
|-<br />
! Capability <br />
! Description<br />
|-<br />
| '''cloud.VMmanagement''' <br />
| This is the '''standard''' capability that every cloud service should publish if it allows users to instantiate/suspend/delete virtual machines<br />
|-<br />
| '''cloud.virtualImagesUpload''' <br />
| This is the capability that allows users to upload their own virtual images through the cloud interface<br />
|-<br />
| security.authentication/security.authorization <br />
| I would keep these capabilities, given that every cloud provider has authentication<br />
|-<br />
| <br />
| <br />
|}<br />
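Since ''Capability_t'' is an open enumeration, a consumer of the information system could distinguish standard, proposed and site-specific values along these lines. This is a hypothetical sketch: the ''cloud.*'' strings are the proposals from the table above, not approved GLUE2 values. <br />

```python
# Illustrative sketch of treating Capability_t as an open enumeration:
# known GLUE2 values plus the proposed (not yet approved) cloud.* ones.

# A few capabilities already present in the GLUE2 specification.
GLUE2_CAPABILITIES = {
    "security.authentication",
    "security.authorization",
    "security.accounting",
    "information.logging",
}

# The new capabilities proposed in the table above (assumed names).
PROPOSED_CLOUD_CAPABILITIES = {
    "cloud.VMmanagement",         # instantiate/suspend/delete VMs
    "cloud.virtualImagesUpload",  # users may upload their own images
}

def classify_capability(value):
    """Classify a published capability string."""
    if value in GLUE2_CAPABILITIES:
        return "standard"
    if value in PROPOSED_CLOUD_CAPABILITIES:
        return "proposed-cloud"
    # Open enum: unknown values are allowed, just not standardised.
    return "site-specific"

print(classify_capability("cloud.VMmanagement"))
```

Because the enumeration is open, the "site-specific" branch never rejects a value; it only flags that the string is not (yet) a shared, standardised label. <br />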
<br />
==== Computing Service entity description ====<br />
<br />
*This Service is used to describe the computing resource itself, decoupling it from the Grid endpoint. <br />
*Attributes that need to be provided by the resource providers are in '''bold'''<br />
<br />
{| border="1"<br />
|-<br />
! Attribute <br />
! Type <br />
! Multiplicity <br />
! Description<br />
|-<br />
| Creation time <br />
| .. <br />
| .. <br />
| ..<br />
|-<br />
| Validity <br />
| .. <br />
| .. <br />
| ..<br />
|-<br />
| ID <br />
| .. <br />
| .. <br />
| ..<br />
|-<br />
| '''Name''' <br />
| String <br />
| 1 <br />
| Human-readable name. It could be used to answer the question "what is the name of the resource?"<br />
|-<br />
| OtherInfo <br />
| String <br />
| n <br />
| Placeholder to add information that does not fit into any other attribute. Cloud information that cannot be mapped in other attributes could be added here.<br />
|-<br />
| '''Capability''' <br />
| Capability_t <br />
| n <br />
| This attribute lists the capabilities available for this service; currently the type ''Capability_t'' does not include specific cloud capabilities. Being an open enum type, it can be extended with additional capabilities. Currently some of the already available capabilities are: security.accounting, security.authentication or information.logging. We could consider adding capabilities like "''cloud.vm.uploadImage''" to answer the question: "am I allowed to upload my own instances?". To identify cloud services there would be a need to add a new capability, common to all cloud services regardless of their specific capabilities, like "cloud.managementSystem" (nb: just an example). ''Resource providers, in this design stage, could provide just descriptions of the capabilities they would like to publish. I (Peter) will try to group them, proposing some labels for the different capabilities.''<br />
|-<br />
| '''Type''' <br />
| ServiceType_t <br />
| 1 <br />
| Type of service in a reverse namespace model, e.g.: org.glite.lb or org.glite.wms. It could be ''org.opennebula'', ''org.stratuslab'' or ''com.cloudsigma''<br />
|}<br />
<br />
There are then a number of additional attributes (static and dynamic) that could be used by cloud services, such as StatusInfo, TotalJobs, RunningJobs, etc. Please note that '''Location''' is a GLUE2 entity that can be linked to the Service entity; this can answer the ''"Where is the cloud facility located?"'' question. <br />
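To make the mapping concrete, a GLUE2 Service entry for a cloud management service could be published roughly as in the LDIF sketch below. This is only an illustration: the DN, IDs and attribute values are hypothetical, not an agreed profile.<br />

```shell
#!/bin/sh
# Hypothetical LDIF sketch of a GLUE2 Service entry for a cloud
# management service; the DN, IDs and values are examples only.
cat > /tmp/cloud-service.ldif <<'EOF'
dn: GLUE2ServiceID=cloud.example.org_service,GLUE2GroupID=resource,o=glue
objectClass: GLUE2Service
GLUE2ServiceID: cloud.example.org_service
GLUE2EntityName: Example Cloud Management Service
GLUE2ServiceType: org.opennebula
GLUE2ServiceCapability: cloud.managementSystem
GLUE2ServiceCapability: cloud.vm.uploadImage
EOF
# Extract the service type, as a consumer of the information would:
grep '^GLUE2ServiceType:' /tmp/cloud-service.ldif | awk '{print $2}'
```

The grep/awk pair at the end mimics how a consumer would pull the service type out of a published entry.<br />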
<br />
=== ComputingEndpoint description ===<br />
<br />
Every ComputingService has '''one or more''' ComputingEndpoints associated with it. The endpoint is used to create, control and monitor computational activities.<br> <br />
<br />
*Resource providers should provide the information to create one endpoint for each interface they expose for the cloud service.<br />
<br />
{| border="1"<br />
|-<br />
! Attribute <br />
! Type <br />
! Multiplicity <br />
! Description<br />
|-<br />
| CreationTime <br />
| .. <br />
| .. <br />
| The most general attributes, like OtherInfo and Capability (described above for the Service entity), are skipped here.<br />
|-<br />
| '''URL''' <br />
| URI <br />
| 1 <br />
| Network location of the endpoint.<br />
|-<br />
| '''Capability''' <br />
| Capability_t <br />
| 0..n <br />
| This is the same attribute as in the Service entity. Some capabilities could be interface-specific; I would replicate all the general capabilities for this instance as well.<br />
|-<br />
| '''Technology''' <br />
| EndpointTechnology_t <br />
| 1 <br />
| Examples are "webservice" and "corba". We could add "webportal" or similar to clarify that the endpoint refers to a web application.<br />
|-<br />
| '''InterfaceName''' <br />
| InterfaceName_t <br />
| 1 (mandatory) <br />
| In the cloud case the interface could be ''OCCI'', ''EC2'', ''jclouds'' or "webinterface". This answers the question: "what type of interface can I use to manage instances on the resource?"<br />
|-<br />
| '''InterfaceVersion''' <br />
| .. <br />
| .. <br />
| No description needed.<br />
|-<br />
| '''SupportedProfile''' <br />
| URI <br />
| * <br />
| We can define here a set of profiles for the authN/authZ of the users, like ''uri:sec:x509''.<br />
|}<br />
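A consumer would typically filter the published endpoints by InterfaceName. The sketch below runs such a filter against a made-up LDIF sample (the hostnames are invented); against a live server the same LDAP filter would be passed to ldapsearch, as shown in the comment.<br />

```shell
#!/bin/sh
# Filter published endpoints by interface name, here against a made-up
# LDIF sample (hostnames are invented). On a live server the same filter
# would be passed to ldapsearch, e.g.:
#   ldapsearch -x -H ldap://test03.egi.cesga.es:2170 -b o=glue \
#     '(&(objectClass=GLUE2Endpoint)(GLUE2EndpointInterfaceName=OCCI))'
cat > /tmp/endpoints.ldif <<'EOF'
dn: GLUE2EndpointID=occi1,o=glue
GLUE2EndpointURL: https://cloud.example.org:3202
GLUE2EndpointInterfaceName: OCCI
GLUE2EndpointInterfaceVersion: 1.1

dn: GLUE2EndpointID=cdmi1,o=glue
GLUE2EndpointURL: https://cdmi.example.org:8080
GLUE2EndpointInterfaceName: CDMI
GLUE2EndpointInterfaceVersion: 1.0
EOF
# Paragraph mode: one LDIF entry per record; print URLs of OCCI endpoints.
awk -v RS='' '/GLUE2EndpointInterfaceName: OCCI/ {
  for (i = 1; i <= NF; i++) if ($i ~ /^https?:/) print $i
}' /tmp/endpoints.ldif
```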
<br />
==== ExecutionEnvironment ====<br />
<br />
The ExecutionEnvironment class describes the hardware and operating system environment in which a job will run. It could be used to describe the VM images already available in the Cloud service. <br />
<br />
{| border="1"<br />
|-<br />
! Attribute <br />
! Type <br />
! Multiplicity <br />
! Description<br />
|-<br />
| '''Platform''' <br />
| Platform_t <br />
| 1 <br />
| The platform architecture; can be: amd64, i386, itanium, powerpc, sparc<br />
|-<br />
| TotalInstances/used instances <br />
| - <br />
| - <br />
| These attributes are not relevant in a cloud environment, where execution environments are deployed dynamically.<br />
|-<br />
| PhysicalCPUs <br />
| UInt32 <br />
| 0..1 <br />
| Physical CPUs are arguably not relevant in a virtualised environment.<br />
|-<br />
| '''LogicalCPUs''' <br />
| UInt32 <br />
| 0..1 <br />
| This attribute could be used to express the '''maximum''' number of cores that is possible to instantiate in a single VM of this type (likely it will be common to all the execution environments of the same cloud service).<br />
|-<br />
| '''MainMemorySize''' <br />
| UInt64 <br />
| 1 <br />
| Max physical memory that is possible to instantiate on a single VM.<br />
|-<br />
| <br />
*'''OSFamily''' <br />
*'''OSName''' <br />
*'''OSVersion'''<br />
<br />
| (*) <br />
| 1 <br />
| Attributes which define the available operating system. There will be an execution environment for every virtual machine image available in the cloud service. We should define some placeholders to create an ExecutionEnvironment ''stub'' describing the max cores/memory for virtual machines uploaded by a user.<br />
|}<br />
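As an illustration, an ExecutionEnvironment entry for a stock image could look like the hypothetical LDIF below; the numbers loosely mirror the CESNET row of the demo table (24 cores, 96 GB expressed in MB), and the short awk check shows one way a publisher could validate the numeric limits before loading the entry.<br />

```shell
#!/bin/sh
# Hypothetical ExecutionEnvironment entry for a stock VM image; the
# numbers loosely mirror the CESNET demo row (24 cores, 96 GB = 98304 MB).
cat > /tmp/execenv.ldif <<'EOF'
dn: GLUE2ResourceID=debian-6.0.3,o=glue
GLUE2ExecutionEnvironmentPlatform: amd64
GLUE2ExecutionEnvironmentLogicalCPUs: 24
GLUE2ExecutionEnvironmentMainMemorySize: 98304
GLUE2ExecutionEnvironmentOSFamily: linux
GLUE2ExecutionEnvironmentOSName: debian
GLUE2ExecutionEnvironmentOSVersion: 6.0.3
EOF
# Sanity-check that the numeric limits really are numeric before publishing:
awk -F': ' '/LogicalCPUs|MainMemorySize/ {
  if ($2 !~ /^[0-9]+$/) { print "invalid: " $0; exit 1 }
  print $1 " = " $2
}' /tmp/execenv.ldif
```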
<br />
=== Deploy a new set of entities ===<br />
<br />
This is the next step: define cloud-specific GLUE entities to extend the GLUE2 schema in order to publish the cloud services in a standard way. <!-- What to model?<br />
What is the name of the resource and what type of interface can I use to manage instances on the resource?<br />
What is the endpoint I should contact to interact with the cloud management interface? (E.g. the url of the web-service/portal) <br />
What are the AuthN and AuthZ rules that operate on your cloud?<br />
What instances are already installed on the resource and am I allowed to upload my own instances?<br />
If I am able to upload instances what format of instances does the resource accept?<br />
Is there a data interface available and if so what is it?<br />
What is the overall size of the resource?<br />
Are instance templates defined that limit the choice of instance scales I am able to run?<br />
What type of virtual network can I establish on the resource?<br />
Does the resource support cloud scalability through managed bursting to another external provider? <br />
<br />
The following are questions on the dynamic information;<br />
<br />
I have a virtual instance that requires X,Y,Z resources, does your cloud have A>X, B>Y,C>Z resource available?<br />
My instance is short lived is its utilisation of resources going to be captured in the information system such that overprovisioning will/will not occur?<br />
What is the charging scheme and how much will using your cloud cost? <br />
--> <br />
<br />
= Technical implementation =<br />
<br />
For a first demo the best technical choice is OpenLDAP, which is available on almost every *nix machine. Moreover, OpenLDAP is the server used by the gLite BDIIs, so it would be easy to reuse the configuration set-up used for the GRIS or the GIIS. <br />
<br />
* Host of the ldap server: '''ldap://test03.egi.cesga.es:2170'''<br />
*Use the GLUE20.schema in the ''slapd.conf'' file to enable all the GLUE2.0 entities.<br />
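The corresponding ''slapd.conf'' fragment could look roughly like the sketch below; the schema file paths and the backend choice are assumptions that vary by distribution and package, so treat this as a starting point only.<br />

```
# slapd.conf fragment (sketch; paths are distribution-dependent)
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/GLUE20.schema

database  bdb
suffix    "o=glue"
rootdn    "o=glue"
```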
<br />
== Resource providers LDAP servers ==<br />
<br />
'''!!! NEW&nbsp;!!!:''' ''IMPORTANT'': <span style="color:#DC143C">Fill the table with the address of the LDAP server set up as an information provider for your test bed.</span> <br />
<br />
{| border="1"<br />
|-<br />
! RP Name <br />
! Resource Centre <br />
! Address of the LDAP server "ldap://hostname:2170"<br />
|-<br />
| CESNET <br />
| CESNET Cloud <br />
| ldap://carach5.ics.muni.cz:2170<br />
|-<br />
| KTH <br />
| KTH Cloud <br />
| ldap://egi.cloud.pdc.kth.se:2170<br />
|-<br />
| GWDG <br />
| GWDG Cloud <br />
| ldap://one.cloud.gwdg.de:2170<br />
|-<br />
| SARA <br />
| SARA Cloud <br />
| ldap://bdii.cloud.sara.nl:2170<br />
|-<br />
| CESGA<br />
| CESGA Cloud<br />
| ldap://ui.egi.cesga.es:2170<br />
|-<br />
| CYFRONET<br />
| CYFRONET Cloud<br />
| ldap://head.cloud.cyf-kr.edu.pl:2170<br />
|-<br />
| TCD<br />
| TCD Cloud<br />
| ldap://cagnode42.cs.tcd.ie:2170<br />
|-<br />
|GRNET<br />
|GRNET_OKEANOS<br />
|ldap://okeanos-is.hellasgrid.gr:2170 <br />
|-<br />
| FZJ<br />
| FZJ Testbed<br />
| ldap://egi-cloud.zam.kfa-juelich.de:2170<br />
|-<br />
| CC-IN2P3<br />
| CC-IN2P3 Cloud<br />
| ldap://cccldbdii01.in2p3.fr:2170<br />
|-<br />
| INFN CNAF<br />
| WNoDeS Cloud<br />
| ldap://test-wnodes-is.cnaf.infn.it:2170<br />
|-<br />
| CSIC<br />
| CSOC Scientific Cloud<br />
| ldap://cloud.ifca.es:2170<br />
|}<br />
<br />
'''EGI Community Forum 2012 demo''' <br />
<br />
{| border="1"<br />
|-<br />
! RP Name <br />
! RP contact name <br />
! Resource Centre name to be published (was Site Name) <br />
! Country <br />
! Capabilities to be published (specify the endpoints supporting the capabilities!) <br />
! Other info to publish <br />
! VM Manager <br />
! V.Images available (OSFamily,OSName,OSVersion) <br />
! Max cores <br />
! Max CPU speed <br />
! Max RAM<br />
|-<br />
| CESNET <br />
| Miroslav Ruda <br />
| CESNET Cloud <br />
| Czech Republic <br />
| cloud.managementSystem, cloud.vm.uploadImage, cloud.data.cdmi <br />
| <br />
| XEN <br />
| 1.) Linux, OpenSUSE, 11.4<br>2.) Linux, Debian, 6.0.3 <br />
| 24 <br />
| <br />
| 96GB<br />
|-<br />
| KTH <br />
| Zeeshan Ali&nbsp;Shah <br />
| KTH-PDC Cloud <br />
| Sweden <br />
| cloud.managementSystem, cloud.vm.customimage, cloud.data.cdmi <br />
| <br />
| <br />
| <br />
| <br />
| <br />
| <br />
|-<br />
| GWDG <br />
| Piotr Kasprzak <br />
| GWDG Cloud <br />
| Germany <br />
| cloud.managementSystem, cloud.vm.uploadImage <br />
| <br />
| KVM <br />
| 1.) Linux, Scientific Linux, 6.1<br>2.) Linux, Ubuntu, 11.10 <br />
| 8 <br />
| 2.4 GHZ <br />
| 16GB<br />
|-<br />
| CYFRONET <br />
| Jan Meizner <br />
| CYFRONET Cloud <br />
| Poland <br />
| cloud.managementSystem, cloud.vm.uploadImage <br />
| <br />
| KVM <br />
| <br />
| 24 <br />
| <br />
| 48GB<br />
|-<br />
| CESGA <br />
| Alvaro Simon <br />
| CESGA Cloud <br />
| Spain <br />
| cloud.managementSystem, cloud.vm.customimage <br />
| <br />
| KVM <br />
| 1.) Linux, Scientific Linux, 5.5 <br />
| 264<br />
| 2.6 GHZ<br />
| 264GB<br />
|}<br />
<br />
== Distributed implementation ==<br />
<br />
Publishing correct information in the information system must be the responsibility of the resource provider. To build a decentralised information system, the following are needed:<br />
* A central information system instance that pulls the information from the resource information providers<br />
* A distributed information system to be queried by the central one (one instance for every cloud provider)<br />
<br />
<br />
A possible strategy is to base everything on the Top-BDII, which is the currently available technology. Ideally one LDAP server per resource provider is sufficient. The Top-BDII can be configured to fetch the data from the different LDAP servers and merge it.<br />
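As a sketch of that merging set-up, the Top-BDII could be pointed at a plain list of source LDAP URLs. The file name and format below are assumptions modelled on the gLite BDII conventions, with the URLs taken from the provider table on this page.<br />

```
# hypothetical list of source LDAP servers for the Top-BDII to merge
CESNET  ldap://carach5.ics.muni.cz:2170/o=glue
KTH     ldap://egi.cloud.pdc.kth.se:2170/o=glue
GWDG    ldap://one.cloud.gwdg.de:2170/o=glue
```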
<br />
Pros:<br />
* No need to develop transport and updating mechanisms <br />
* Resource providers only need to produce one LDIF file and load it into an LDAP server.<br />
Cons:<br />
* The Top-BDII has shown poor performance in publishing dynamic data (not an issue for testing, as long as there are few resource providers)<br />
* The published information must be 100% GLUE2 compliant<br />
<br />
'''Find [[Fedclouds BDII instructions|here]] some guidelines for the ldap installation.'''<br />
<br />
=== Example queries ===<br />
<br />
# Get all the endpoints published by the resource providers, with the interface name and the version.<br />
<source lang=bash><br />
$ ldapsearch -x -H ldap://test03.egi.cesga.es:2170 -b o=glue '(objectClass=GLUE2Endpoint)' |<br />
    perl -p00e 's/\r?\n //g' |<br />
    grep -E 'GLUE2EndpointURL|GLUE2EndpointInterfaceName|GLUE2EndpointInterfaceVersion|dn\:' |<br />
    awk '{printf("%s%s", $0, (NR%4 ? " === " : "\n"))}' |<br />
    awk '{print ""$2" "$5" "$8" "$11}' |<br />
    awk -F "GLUE2DomainID=" '{print $2}' |<br />
    awk -F "," '{print $1 " "$3}' |<br />
    awk '{print $1" "$4" "$3" "$5}' |<br />
    sort<br />
<br />
CC-IN2P3 OCCI 1.1 https://ccocci.in2p3.fr:8788<br />
CESGA OCCI 1.1 http://cloud.cesga.es:3200<br />
CESGA OCCI 1.1 http://meghacloud.cesga.es:3200<br />
CESGA OCCI 1.1 https://cloud.cesga.es:3202<br />
CESGA OCCI 1.1 https://meghacloud.cesga.es:3202<br />
CESNET CDMI 1.0 https://carach3.ics.muni.cz:8080/<br />
CESNET OCA 3.4.1 https://carach5.ics.muni.cz:6443/RPC2<br />
CESNET OCCI 0.8 https://carach5.ics.muni.cz:9443/<br />
CESNET OCCI 1.1 http://carach5.ics.muni.cz:3333/<br />
CESNET OCCI 1.1 https://carach5.ics.muni.cz:10443/<br />
CESNET Sunstone 3.4.1 https://carach5.ics.muni.cz/<br />
csTCDie OCCI 1.1 https://cagnode42.cs.tcd.ie<br />
csTCDie XML-RPC 1.4 https://cagnode42.cs.tcd.ie:2634<br />
CYFRONET OCCI 1.1 http://cloud-lab.grid.cyf-kr.edu.pl:3200/<br />
CYFRONET OCCI 1.1 https://cloud-lab.grid.cyf-kr.edu.pl:3443/<br />
FZJ OCCI 1.1 https://egi-cloud.zam.kfa-juelich.de:8788/<br />
GRNET_OKEANOS OCCI 1.1 http://okeanos-occi.hellasgrid.gr:8888<br />
GWDG CDMI 1.0.1 http://cdmi.cloud.gwdg.de:4001<br />
GWDG CDMI 1.0.1 https://cdmi.cloud.gwdg.de:4000<br />
GWDG OCCI 0.8 http://occi.cloud.gwdg.de:3400<br />
GWDG OCCI 1.1 http://occi.cloud.gwdg.de:3200<br />
GWDG OCCI 1.1 http://occi.cloud.gwdg.de:5000<br />
GWDG OCCI 1.1 https://occi.cloud.gwdg.de:3100<br />
INFN_CNAF OCCI 1.1 https://test-wnodes-web01.cnaf.infn.it:8443/<br />
SARA OCCI 0.8 https://occi.cloud.sara.nl/<br />
SARA OCCI 1.1 https://occi11.cloud.sara.nl/<br />
</source><br />
<br />
= Implementation of the cloud service types in GOCDB =<br />
<br />
GOCDB is the EGI service registry. It contains the service endpoints ([https://goc.egi.eu/portal/index.php?Page_Type=View_Object&object_id=23483&grid_id=0 example]), the grid site topology and other information such as downtime records and contact lists. GOCDB does not contain dynamic information such as the number of cores available or resource capacity.<br />
<br />
The plan for inclusion in GOCDB could be the following:<br />
# Definition of the new service types:<br />
## Resource providers service types:<br />
##* ''org.ogf.OCCI'': service exposing OCCI interface<br />
##* ''org.snia.CDMI'': service exposing CDMI interface <br />
##* ''org.opennebula.OCA'': OpenNebula management interface<br />
##* ''eu.egi.cloud-site-bdii'': cloud site information provider<br />
##* ''eu.egi.cloud-accounting'': Accounting data parser<br />
##* ..more?<br />
## Infrastructure services<br />
##* ''org.stratuslab.marketplace'': StratusLab marketplace<br />
##* .. more?<br />
# Registration of the cloud resource centres<br />
## As non-EGI for the first iteration (the flag can be changed easily)<br />
## Fill the resource provider contacts (site manager, security officer..)<br />
## Add the service endpoints to the site<br />
## The GIIS URL must be in the format: ''ldap://<site bdii url>:2170/o=glue''</div>
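The required GIIS URL format can be checked mechanically before registration; the snippet below is a minimal sketch, and the hostname in it is a hypothetical example.<br />

```shell
#!/bin/sh
# Minimal format check for the GIIS URL expected by GOCDB:
#   ldap://<site bdii host>:2170/o=glue
# The hostname below is a hypothetical example.
url="ldap://site-bdii.example.org:2170/o=glue"
case "$url" in
  ldap://*:2170/o=glue) echo "OK: $url" ;;
  *) echo "BAD: $url" >&2; exit 1 ;;
esac
```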
Patrykl