https://wiki.egi.eu/w/api.php?action=feedcontributions&user=Verlato&feedformat=atom
EGIWiki - User contributions [en]
2024-03-29T08:11:24Z
User contributions
MediaWiki 1.37.1
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=101013
Federated Cloud infrastructure status
2019-02-04T10:51:45Z
<p>Verlato: /* Status of the Federated Cloud */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are: <br />
<br />
*providing a snapshot of the resources provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining, the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, the VO used to allow evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== CMF support highlights ==<br />
<br />
'''Last update: April 2018''' <br />
<br />
{| cellspacing="1" cellpadding="1" width="1553" border="1"<br />
|-<br />
! scope="col" | CMF <br />
! scope="col" | Version <br />
! scope="col" | Comments<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | &gt;=Ocata <br />
| according to https://releases.openstack.org/, Ocata and later releases are "Maintained", i.e. supported for approximately 18 months after the release date. For instance, Ocata was released on 2017-02-22, so its maintenance window runs approximately until August 2018<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | Mitaka/Ubuntu LTS <br />
| some RCs are running Mitaka from Ubuntu LTS, which is supported for 5 years. Due to project constraints, some of them may switch to a newer version, while others may keep the current Mitaka/Ubuntu set-up as planned last year; '''we will therefore ask these RCs about their plans once again, to confirm whether they will stay with Mitaka'''<br />
|-<br />
| OpenStack <br />
| bgcolor="#ff6666" align="center" | &lt;Ocata other than Mitaka/Ubuntu LTS <br />
| some RCs are still running an outdated, unsupported version of OpenStack. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy], which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against these sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#ff6666" align="center" | 4 <br />
| no longer supported; OpenNebula RCs need to upgrade to OpenNebula 5. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy], which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against these sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#66ff99" align="center" | 5 <br />
| Supported<br />
|-<br />
| Synnefo <br />
| bgcolor="#66ff99" align="center" | <br> <br />
| Supported<br />
|}<br />
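<br />
As a quick sanity check of the support-window arithmetic above, the sketch below computes the nominal end-of-maintenance date from a release date, assuming the "approximately 18 months" quoted from https://releases.openstack.org/ can be treated as a fixed 18-month offset. For Ocata (released 2017-02-22) it yields August 2018, matching the comment in the table. <br />
<br />
<syntaxhighlight lang="python">
from datetime import date

# Approximate "Maintained" window quoted on https://releases.openstack.org/
# (assumption: treated here as a fixed 18-month offset).
MAINTENANCE_MONTHS = 18

def end_of_maintenance(release: date, months: int = MAINTENANCE_MONTHS) -> date:
    """Return the nominal end-of-maintenance date: release date plus N months."""
    month_index = release.month - 1 + months
    year = release.year + month_index // 12
    month = month_index % 12 + 1
    day = min(release.day, 28)  # avoid invalid dates such as 30 February
    return date(year, month, day)

# Ocata was released on 2017-02-22; 18 months later is 2018-08-22,
# i.e. approximately August 2018 as stated in the table above.
print(end_of_maintenance(date(2017, 2, 22)))
</syntaxhighlight>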
<br />
== Status of the Federated Cloud ==<br />
<br />
The table below lists all Resource Centres that are fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]] procedure. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu] Nagios instances. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: June 2018 http://go.egi.eu/fedcloudstatus''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | CMF Upgrade plans <br />
! style="border-bottom:1px solid black;" | Using CMD <br />
! style="border-bottom:1px solid black;" | KVM/XEN? <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Pay-per-use <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 32 GB of RAM, up to 1TB attachable block storage<br> <br />
| style="border-bottom:1px dotted silver;" | Network ready, integration planned for Queens release <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | Queens in August 2018 <br />
| style="border-bottom:1px dotted silver;" | Yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-07-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4.6 <br />
| style="border-bottom:1px dotted silver;" | No in near future. Seems that is the newer version compatible with CMD.<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Yes (CMD-ONE-1) <br />
<br />
Except occi-server (2.0.4) and early adopter of cloudkeeper/cloudkeeper-one (1.6.0/1.3.0)<br />
<br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- <span class="right" id="dashboard_host_allocated_cpu_str">192 Cores total<br />
</span> <br />
<br />
<span class="right">- </span><span class="right" id="dashboard_host_allocated_mem_str">340 GB of RAM total<br />
</span><br> <br />
<br />
<span class="right">- </span>Two data stores of 3TB and 700GB. <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 100 GB <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Ready (network level) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-04-23<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4.14.2 <br />
| style="border-bottom:1px dotted silver;" | Migration to OpenStack Queens/Rocky with OpenID Connect authN/authZ. ETA Q1/2019. <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br />
| style="border-bottom:1px dotted silver;" | Migration to newer versions of OpenStack planned / under way<br />
| style="border-bottom:1px dotted silver;" | NO<br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://api.prod.cloud.gwdg.de:5000/v3 Openstack API]<br>[https://cloud.gwdg.de/horizon Horizon dashboard with EGI AAI integration via OIDC]<br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 352 Cores with 1408 GB RAM <br />
<br />
&nbsp;- 50 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 64 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-10-26<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton/Ocata/Pike/Queens <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Ubuntu 18.04 and Pike during Q3 2018<br> <br />
| style="border-bottom:1px dotted silver;" | No<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 7744 Cores: <br />
<br />
&nbsp;&nbsp;&nbsp; - 32 nodes x 8 vcpus x 16GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 36 nodes x 24 vcpus x 48GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 34 nodes x 32 vcpus x 128GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 2 nodes x 80 vcpus x 1TB RAM<br> <br />
<br />
&nbsp;&nbsp;&nbsp; - 24 nodes x 32 vcpus x 32GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 144 nodes x 32 vcpus x 32GB RAM <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
GPU and Infiniband access (upon request)<br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
&nbsp;- 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m</span> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" | Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM - 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB) <br />
| Ready <br />
| No<br />
| 2018-06-25<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu 18.04. Planned but not scheduled yet <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI]<br>[http://cloud.recas.ba.infn.it:8774/v2.1/%(tenant_id)s Openstack Compute]<br>[http://cloud.recas.ba.infn.it:8080/v1/AUTH_%(tenant_id)s Openstack Object-Store]<br>[http://egi-cloud.recas.ba.infn.it/ Horizon dashboard with auth token] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | Not ready yet but planned <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-07-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago<br />
| style="border-bottom:1px dotted silver;" | [https://goc.egi.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Rocky <br />
| style="border-bottom:1px dotted silver;" | Rocky is the current release.<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it/v3/ OpenStack Auth URL]<br>[https://egi-cloud.pd.infn.it:8443/dashboard OpenStack Horizon dashboard with EGI Check-in AAI]<br />
| style="border-bottom:1px dotted silver;" | <br />
- 184 Cores with 384 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 40 VCPUs, 90GB RAM, 200GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2019-02-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 744 Cores with 3968 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 480 TB storage (Cinder / CEPH) <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for October 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br />
Candidate <br />
<br />
May 2018 <br />
<br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Queens or "R", Ubuntu18.04 <br />
<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM and lxd[[]] <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://nimbus.ncg.ingrid.pt:8774/v2.1/%(tenant_id)s Openstack Compute]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | Planned for second half of 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 31.05.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | Not planned yet. <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | By 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens. Estimated upgrade Q4 2018.<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Planned for Q4 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 08.06.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4 <br />
| style="border-bottom:1px dotted silver;" | No OpenNebula upgrade planned. We seriously think about moving to OpenStack, but not for this year.<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No plans.<br> <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | CMD-ONE-1 except occi-server (occi-server version installed is 2.0.0.alpha.1 which is needed for GPU support) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens or Rocky, whichever is supported at the time of the upgrade. Estimated upgrade period Q2 2019. <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-ctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-09-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BITP <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens during July-August 2018 (not possible until now due to quite intensive resource usage); a downtime will be needed <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | Planned for July-August 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (13-11-2017) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Yes/No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
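<br />
The endpoints listed in the table can be probed without EGI credentials for a basic liveness check: an OpenStack Identity (Keystone) URL such as the INFN-Padova one returns a public API-version document on a plain GET, and the other endpoints should at least accept TCP connections. The sketch below is a minimal availability probe under those assumptions; it is illustrative only, is not part of the ARGO/Nagios monitoring mentioned above, and the endpoint subset and timeout are arbitrary choices. <br />
<br />
<syntaxhighlight lang="python">
import socket
import urllib.request
from urllib.parse import urlparse

# Illustrative subset of endpoints copied from the table above.
ENDPOINTS = [
    "https://egi-cloud.pd.infn.it/v3/",    # INFN-Padova OpenStack Keystone
    "https://carach5.ics.muni.cz:11443/",  # CESNET-MetaCloud OCCI
    "https://cloud.ifca.es:8787/",         # CSIC/IFCA OCCI
]

def tcp_reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint's host and port accept a TCP connection."""
    parsed = urlparse(url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False

for url in ENDPOINTS:
    print(url, "reachable" if tcp_reachable(url) else "unreachable")

# Keystone publishes its API version without authentication; an HTTP 200
# with a JSON body confirms the Identity service itself is answering.
with urllib.request.urlopen("https://egi-cloud.pd.infn.it/v3/", timeout=5) as resp:
    print(resp.status, resp.headers.get("Content-Type"))
</syntaxhighlight>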
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB <br> '''Icehouse''': m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (Dec 2017)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-10-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=101012
Federated Cloud infrastructure status
2019-02-04T10:49:50Z
<p>Verlato: /* Status of the Federated Cloud */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== CMF support highlights ==<br />
<br />
'''Last update: April 2018''' <br />
<br />
{| cellspacing="1" cellpadding="1" width="1553" border="1"<br />
|-<br />
! scope="col" | CMF <br />
! scope="col" | Version <br />
! scope="col" | Comments<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | &gt;=Ocata <br />
| here https://releases.openstack.org/ you find that Ocata and above are "Maintained", meaning "Approximately 18 months" supported since the release date. For instance, for Ocata, released on 2017-02-22, this means approximately until August 2018<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | Mitaka/Ubuntu LTS <br />
| some RCs are also using Mitaka from Ubuntu LTS, which is supported for 5 years. Due to project constraints, some of these could decide to switch to a new version, or stay with the current Mitaka/Ubuntu set up as planned last year; so '''we will ask about plans once again, to confirm they will stay with Mitaka or otherwise'''<br />
|-<br />
| OpenStack <br />
| bgcolor="#ff6666" align="center" | &lt;Ocata other than Mitaka/Ubuntu LTS <br />
| there are situations where an outdated version of OpenStack is used. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy] which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#ff6666" align="center" | 4 <br />
| no longer supported, OpenNebula RCs need to update to OpenNebula 5. '''These RCs are violating the the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy] which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#66ff99" align="center" | 5 <br />
| Supported<br />
|-<br />
| Synnefo <br />
| bgcolor="#66ff99" align="center" | <br> <br />
| Supported<br />
|}<br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: June 2018 http://go.egi.eu/fedcloudstatus''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | CMF Upgrade plans <br />
! style="border-bottom:1px solid black;" | Using CMD <br />
! style="border-bottom:1px solid black;" | KVM/XEN? <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Pay-per-use <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 32 GB of RAM, up to 1TB attachable block storage<br> <br />
| style="border-bottom:1px dotted silver;" | Network ready, integration planned for Queens release <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | Queens in August 2018 <br />
| style="border-bottom:1px dotted silver;" | Yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-07-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4.6 <br />
| style="border-bottom:1px dotted silver;" | No in near future. Seems that is the newer version compatible with CMD.<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Yes (CMD-ONE-1) <br />
<br />
Except occi-server (2.0.4) and early adopter of cloudkeeper/cloudkeeper-one (1.6.0/1.3.0)<br />
<br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- <span class="right" id="dashboard_host_allocated_cpu_str">192 Cores total<br />
</span> <br />
<br />
<span class="right">- </span><span class="right" id="dashboard_host_allocated_mem_str">340 GB of RAM total<br />
</span><br> <br />
<br />
<span class="right">- </span>Two data stores of 3TB and 700GB. <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 100 GB <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Ready (network level) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-04-23<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4.14.2 <br />
| style="border-bottom:1px dotted silver;" | Migration to OpenStack Queens/Rocky with OpenID Connect authN/authZ. ETA Q1/2019. <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br />
| style="border-bottom:1px dotted silver;" | Migration to newer versions of OpenStack planned / under way<br />
| style="border-bottom:1px dotted silver;" | NO<br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://api.prod.cloud.gwdg.de:5000/v3 Openstack API]<br>[https://cloud.gwdg.de/horizon Horizon dashboard with EGI AAI integration via OIDC]<br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 352 Cores with 1408 GB RAM <br />
<br />
&nbsp;- 50 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 64 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-10-26<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton/Ocata/Pike/Queens <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Ubuntu 18.04 and Pike during Q3 2018<br> <br />
| style="border-bottom:1px dotted silver;" | No<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 7744 Cores: <br />
<br />
&nbsp;&nbsp;&nbsp; - 32 nodes x 8 vcpus x 16GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 36 nodes x 24 vcpus x 48GM RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 34 nodes x 32 vcpus x 128GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 2 nodes x 80 vcpus x 1TB RAM<br> <br />
<br />
&nbsp;&nbsp;&nbsp; - 24 nodes x 32 vcpus x 32GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 144 nodes x 32 vcpus x 32GB RAM <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
GPU and Infiniband access (upon request)<br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m</span> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" | Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM - 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB) <br />
| Ready <br />
| No<br />
| 2018-06-25<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu 18.04. Planned but not scheduled yet <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI]<br>[http://cloud.recas.ba.infn.it:8774/v2.1/%(tenant_id)s Openstack Compute]<br>[http://cloud.recas.ba.infn.it:8080/v1/AUTH_%(tenant_id)s Openstack Object-Store]<br>[http://egi-cloud.recas.ba.infn.it/ Horizon dashboard with auth token] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | Not ready yet but planned <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-07-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago<br />
| style="border-bottom:1px dotted silver;" | [https://gocdb.egi.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Rocky <br />
| style="border-bottom:1px dotted silver;" | Rocky is the current release.<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it/v3/ OpenStack Auth URL]<br>[https://egi-cloud.pd.infn.it:8443/dashboard OpenStack Horizon dashboard with EGI Check-in AAI]<br />
| style="border-bottom:1px dotted silver;" | <br />
- 184 Cores with 384 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 40 VCPUs, 90GB RAM, 200GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2019-02-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 744 Cores with 3968 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 480 TB storage (Cinder / CEPH) <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for October 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br />
Candidate <br />
<br />
May 2018 <br />
<br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Queens or "R", Ubuntu18.04 <br />
<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM and lxd[[]] <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://nimbus.ncg.ingrid.pt:8774/v2.1/%(tenant_id)s Openstack Compute]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | Planned for second half of 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 31.05.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | Not planned yet. <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | By 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens. Estimated upgrade Q4 2018.<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Planned for Q4 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 08.06.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4 <br />
| style="border-bottom:1px dotted silver;" | No OpenNebula upgrade planned. We seriously think about moving to OpenStack, but not for this year.<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No plans.<br> <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | CMD-ONE-1 except occi-server (occi-server version installed is 2.0.0.alpha.1 which is needed for GPU support) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens or Rocky, whichever is supported at the time of the upgrade. Estimated upgrade period Q2 2019. <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-ctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-09-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BITP <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens during July-August 2018 (not possible until now due to quite intensive resource usage); a downtime will be needed <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | Planned for July-August 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (13-11-2017) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Yes/No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
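<br />
The resource endpoints listed above can also be probed directly. As a rough, illustrative sanity check (a minimal sketch, not the official FedCloud monitoring run by the ARGO/Nagios instances), the snippet below issues an unauthenticated HTTPS GET against one of the Keystone v3 URLs from the table and prints the API version it advertises. It assumes Python 3 with the third-party requests package installed; the INFN-PADOVA-STACK endpoint is used only as an example. <br />
<br />
<pre>
# Illustrative endpoint check only; assumes Python 3 and the "requests"
# package, plus outbound HTTPS access to the site. The URL below is the
# INFN-PADOVA-STACK Keystone v3 endpoint taken from the table above.
import requests

AUTH_URL = "https://egi-cloud.pd.infn.it/v3/"

def check_keystone(url: str) -> None:
    """GET the Keystone v3 root and print the API version it advertises."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    version = resp.json().get("version", {})
    print(f"{url} -> id={version.get('id')}, status={version.get('status')}")

if __name__ == "__main__":
    check_keystone(AUTH_URL)
</pre>
<br />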
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB '''Icehouse''': Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (Dec 2017)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-10-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=101011
Federated Cloud infrastructure status
2019-02-04T10:36:06Z
<p>Verlato: /* Status of the Federated Cloud */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== CMF support highlights ==<br />
<br />
'''Last update: April 2018''' <br />
<br />
{| cellspacing="1" cellpadding="1" width="1553" border="1"<br />
|-<br />
! scope="col" | CMF <br />
! scope="col" | Version <br />
! scope="col" | Comments<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | &gt;=Ocata <br />
| here https://releases.openstack.org/ you find that Ocata and above are "Maintained", meaning "Approximately 18 months" supported since the release date. For instance, for Ocata, released on 2017-02-22, this means approximately until August 2018<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | Mitaka/Ubuntu LTS <br />
| some RCs are also using Mitaka from Ubuntu LTS, which is supported for 5 years. Due to project constraints, some of these could decide to switch to a new version, or stay with the current Mitaka/Ubuntu set up as planned last year; so '''we will ask about plans once again, to confirm they will stay with Mitaka or otherwise'''<br />
|-<br />
| OpenStack <br />
| bgcolor="#ff6666" align="center" | &lt;Ocata other than Mitaka/Ubuntu LTS <br />
| there are situations where an outdated version of OpenStack is used. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy] which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#ff6666" align="center" | 4 <br />
| no longer supported, OpenNebula RCs need to update to OpenNebula 5. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy] which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#66ff99" align="center" | 5 <br />
| Supported<br />
|-<br />
| Synnefo <br />
| bgcolor="#66ff99" align="center" | <br> <br />
| Supported<br />
|}<br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: June 2018 http://go.egi.eu/fedcloudstatus''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | CMF Upgrade plans <br />
! style="border-bottom:1px solid black;" | Using CMD <br />
! style="border-bottom:1px solid black;" | KVM/XEN? <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Pay-per-use <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 32 GB of RAM, up to 1TB attachable block storage<br> <br />
| style="border-bottom:1px dotted silver;" | Network ready, integration planned for Queens release <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | Queens in August 2018 <br />
| style="border-bottom:1px dotted silver;" | Yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-07-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4.6 <br />
| style="border-bottom:1px dotted silver;" | No in near future. Seems that is the newer version compatible with CMD.<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Yes (CMD-ONE-1) <br />
<br />
Except occi-server (2.0.4); early adopter of cloudkeeper/cloudkeeper-one (1.6.0/1.3.0)<br />
<br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- <span class="right" id="dashboard_host_allocated_cpu_str">192 Cores total<br />
</span> <br />
<br />
<span class="right">- </span><span class="right" id="dashboard_host_allocated_mem_str">340 GB of RAM total<br />
</span><br> <br />
<br />
<span class="right">- </span>Two data stores of 3TB and 700GB. <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 100 GB <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Ready (network level) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-04-23<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4.14.2 <br />
| style="border-bottom:1px dotted silver;" | Migration to OpenStack Queens/Rocky with OpenID Connect authN/authZ. ETA Q1/2019. <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br />
| style="border-bottom:1px dotted silver;" | Migration to newer versions of OpenStack planned / under way<br />
| style="border-bottom:1px dotted silver;" | NO<br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://api.prod.cloud.gwdg.de:5000/v3 Openstack API]<br>[https://cloud.gwdg.de/horizon Horizon dashboard with EGI AAI integration via OIDC]<br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 352 Cores with 1408 GB RAM <br />
<br />
&nbsp;- 50 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 64 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-10-26<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton/Ocata/Pike/Queens <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Ubuntu 18.04 and Pike during Q3 2018<br> <br />
| style="border-bottom:1px dotted silver;" | No<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 7744 Cores: <br />
<br />
&nbsp;&nbsp;&nbsp; - 32 nodes x 8 vcpus x 16GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 36 nodes x 24 vcpus x 48GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 34 nodes x 32 vcpus x 128GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 2 nodes x 80 vcpus x 1TB RAM<br> <br />
<br />
&nbsp;&nbsp;&nbsp; - 24 nodes x 32 vcpus x 32GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 144 nodes x 32 vcpus x 32GB RAM <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUs, 100GB HD, 1TB RAM (upon request) <br />
<br />
GPU and Infiniband access (upon request)<br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m</span> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" | Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM - 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB) <br />
| Ready <br />
| No<br />
| 2018-06-25<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu 18.04. Planned but not scheduled yet <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI]<br>[http://cloud.recas.ba.infn.it:8774/v2.1/%(tenant_id)s Openstack Compute]<br>[http://cloud.recas.ba.infn.it:8080/v1/AUTH_%(tenant_id)s Openstack Object-Store]<br>[http://egi-cloud.recas.ba.infn.it/ Horizon dashboard with auth token] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | Not ready yet but planned <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-07-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago<br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Rocky <br />
| style="border-bottom:1px dotted silver;" | Rocky is the current release.<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it/v3/ Keystone]<br>[https://egi-cloud.pd.infn.it:8443/dashboard Horizon dashboard with EGI Check-in AAI]<br />
| style="border-bottom:1px dotted silver;" | <br />
- 184 Cores with 384 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 40 VCPUs, 90GB RAM, 200GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2019-02-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 744 Cores with 3968 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 480 TB storage (Cinder / CEPH) <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for October 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br />
Candidate <br />
<br />
May 2018 <br />
<br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Queens or "R", Ubuntu18.04 <br />
<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM and lxd[[]] <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://nimbus.ncg.ingrid.pt:8774/v2.1/%(tenant_id)s Openstack Compute]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | Planned for second half of 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 31.05.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | Not planned yet. <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | By 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens. Estimated upgrade Q4 2018.<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Planned for Q4 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 08.06.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4 <br />
| style="border-bottom:1px dotted silver;" | No OpenNebula upgrade planned. We seriously think about moving to OpenStack, but not for this year.<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No plans.<br> <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | CMD-ONE-1 except occi-server (occi-server version installed is 2.0.0.alpha.1 which is needed for GPU support) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens or Rocky, whichever is supported at the time of the upgrade. Estimated upgrade period Q2 2019. <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-ctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-09-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BITP <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens during July-August 2018 (not possible until now due to quite intensive resource usage); a downtime will be needed <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | Planned for July-August 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (13-11-2017) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Yes/No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB <br> '''Icehouse''': m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (Dec 2017)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-10-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=99627
Federated Cloud infrastructure status
2018-07-02T09:16:23Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== CMF support highlights ==<br />
<br />
'''Last update: April 2018''' <br />
<br />
{| cellspacing="1" cellpadding="1" width="1553" border="1"<br />
|-<br />
! scope="col" | CMF <br />
! scope="col" | Version <br />
! scope="col" | Comments<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | &gt;=Ocata <br />
| here https://releases.openstack.org/ you find that Ocata and above are "Maintained", meaning "Approximately 18 months" supported since the release date. For instance, for Ocata, released on 2017-02-22, this means approximately until August 2018<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | Mitaka/Ubuntu LTS <br />
| some RCs are also using Mitaka from Ubuntu LTS, which is supported for 5 years. Due to project constraints, some of these could decide to switch to a new version, or stay with the current Mitaka/Ubuntu set up as planned last year; so '''we will ask about plans once again, to confirm they will stay with Mitaka or otherwise'''<br />
|-<br />
| OpenStack <br />
| bgcolor="#ff6666" align="center" | &lt;Ocata other than Mitaka/Ubuntu LTS <br />
| there are situations where an outdated version of OpenStack is used. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy] which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#ff6666" align="center" | 4 <br />
| no longer supported, OpenNebula RCs need to update to OpenNebula 5. '''These RCs are violating the the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy] which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#66ff99" align="center" | 5 <br />
| Supported<br />
|-<br />
| Synnefo <br />
| bgcolor="#66ff99" align="center" | <br> <br />
| Supported<br />
|}<br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: December 2017 - UPDATE&nbsp;IN&nbsp;PROGRESS http://go.egi.eu/fedcloudstatus''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | CMF Upgrade plans <br />
! style="border-bottom:1px solid black;" | Using CMD <br />
! style="border-bottom:1px solid black;" | KVM/XEN? <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Pay-per-use <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 32 GB of RAM, up to 1TB attachable block storage<br> <br />
| style="border-bottom:1px dotted silver;" | Network ready, integration planned for Queens release <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB <br />
| style="border-bottom:1px dotted silver;" | February 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-12-01<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2.1 <br />
| style="border-bottom:1px dotted silver;" | No in near future. Current version is compatible with CMD.<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Yes (CMD-ONE-1) <br />
<br />
Except occi-server, which is 2.0.0.alpha.1 <br />
<br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- <span class="right" id="dashboard_host_allocated_cpu_str">192 Cores total<br />
</span> <br />
<br />
<span class="right">- </span><span class="right" id="dashboard_host_allocated_mem_str">340 GB of RAM total<br />
</span><br> <br />
<br />
<span class="right">- </span>Two data stores of 3TB and 700GB. <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 100 GB <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Ready (network level) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-04-23<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4.14.2 <br />
| style="border-bottom:1px dotted silver;" | Migration to OpenStack Queens/Rocky with OpenID Connect authN/authZ. ETA Q1/2019. <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-04-23<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton/Ocata/Pike/Queens <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Ubuntu 18.04 and Pike during Q3 2018<br> <br />
| style="border-bottom:1px dotted silver;" | No<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 7744 Cores: <br />
<br />
&nbsp;&nbsp;&nbsp; - 32 nodes x 8 vcpus x 16GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 36 nodes x 24 vcpus x 48GM RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 34 nodes x 32 vcpus x 128GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 2 nodes x 80 vcpus x 1TB RAM<br> <br />
<br />
&nbsp;&nbsp;&nbsp; - 24 nodes x 32 vcpus x 32GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 144 nodes x 32 vcpus x 32GB RAM <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
GPU and Infiniband access (upon request)<br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 ΤΒ storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m</span> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" | Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM - 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB) <br />
| Ready <br />
| No<br />
| 2018-06-25<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI]<br>[http://cloud.recas.ba.infn.it:8774/v2.1/%(tenant_id)s Openstack Compute]<br>[http://cloud.recas.ba.infn.it:8080/v1/AUTH_%(tenant_id)s Openstack Object-Store]<br>[http://egi-cloud.recas.ba.infn.it/ Horizon dashboard with auth token] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-12-14<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago<br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Queens in Q4<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI]<br>[https://egi-cloud.pd.infn.it/dashboard Horizon dashboard with West-Life/WeNMR SSO and INDIGO-IAM]<br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 240 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 160GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-07-02<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 744 Cores with 3968 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 480 TB storage (Cinder / CEPH) <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for October 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br />
Candidate <br />
<br />
May 2018 <br />
<br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Queens or "R", Ubuntu18.04 <br />
<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM and lxd[[]] <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://nimbus.ncg.ingrid.pt:8774/v2.1/%(tenant_id)s Openstack Compute]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | Planned for second half of 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 31.05.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | Not planned yet. <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | By 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens. Estimated upgrade Q4 2018.<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Planned for Q4 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 08.06.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4 <br />
| style="border-bottom:1px dotted silver;" | No OpenNebula upgrade planned. We seriously think about moving to OpenStack, but not for this year.<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No plans.<br> <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | CMD-ONE-1 except occi-server (occi-server version installed is 2.0.0.alpha.1 which is needed for GPU support) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens, whichever is supported at the time of the upgrade. Estimated upgrade Q4 2018. <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloudctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BITP <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens during July-August 2018 (not possible until now due to quite intensive resource usage); a downtime will be needed <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | Planned for July-August 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (13-11-2017) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Yes/No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB <br> '''Icehouse''': m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (Dec 2017)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-10-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=99626
Federated Cloud infrastructure status
2018-07-02T09:15:46Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== CMF support highlights ==<br />
<br />
'''Last update: April 2018''' <br />
<br />
{| cellspacing="1" cellpadding="1" width="1553" border="1"<br />
|-<br />
! scope="col" | CMF <br />
! scope="col" | Version <br />
! scope="col" | Comments<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | &gt;=Ocata <br />
| at https://releases.openstack.org/ Ocata and later releases are listed as "Maintained", meaning they are supported for approximately 18 months after the release date. For instance, Ocata, released on 2017-02-22, is supported approximately until August 2018<br />
|-<br />
| OpenStack <br />
| bgcolor="#66ff99" align="center" | Mitaka/Ubuntu LTS <br />
| some RCs are also using Mitaka from Ubuntu LTS, which is supported for 5 years. Due to project constraints, some of these could decide to switch to a newer version, or stay with the current Mitaka/Ubuntu set-up as planned last year; '''we will therefore ask about their plans once again, to confirm whether they will stay with Mitaka or upgrade'''<br />
|-<br />
| OpenStack <br />
| bgcolor="#ff6666" align="center" | &lt;Ocata other than Mitaka/Ubuntu LTS <br />
| there are situations where an outdated version of OpenStack is used. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy] which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#ff6666" align="center" | 4 <br />
| no longer supported, OpenNebula RCs need to update to OpenNebula 5. '''These RCs are violating the [https://documents.egi.eu/public/ShowDocument?docid=669 Service Operations Security Policy] which states that resource centres SHOULD NOT run unsupported software in their production infrastructure. We will open tickets against those sites to plan an upgrade'''<br />
|-<br />
| OpenNebula <br />
| bgcolor="#66ff99" align="center" | 5 <br />
| Supported<br />
|-<br />
| Synnefo <br />
| bgcolor="#66ff99" align="center" | <br> <br />
| Supported<br />
|}<br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
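<br />
For a quick manual check that an individual OCCI endpoint listed in the table below is reachable (complementing, not replacing, the ARGO/nagios probes above), a small HTTPS probe can be used. The following is only an illustrative sketch under stated assumptions: it issues plain HEAD requests with the Python ''requests'' library, the endpoint URLs are copied from the table, the 10-second timeout is arbitrary, and certificate verification is disabled because many sites use IGTF CAs that are not in the default trust store. It does not authenticate, so any HTTP status code (including 401/403) simply means the service answered. <br />
<pre>
# Minimal reachability probe for a few of the OCCI endpoints below
# (illustrative sketch only, not the official ARGO/nagios checks).
# Assumes: Python 3 with the 'requests' package installed; endpoint URLs
# copied from the table; verify=False because site certificates are often
# issued by IGTF CAs that are missing from the default CA bundle.
import requests

ENDPOINTS = [
    "https://occi-api.100percentit.com:8787/occi1.1",  # 100IT
    "https://carach5.ics.muni.cz:11443/",               # CESNET-MetaCloud
]

for url in ENDPOINTS:
    try:
        resp = requests.head(url, timeout=10, verify=False)
        # Any HTTP response means the service answered; OCCI normally requires
        # authentication, so 401/403 is the expected outcome of this probe.
        print(f"{url} -> HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{url} -> unreachable ({exc})")
</pre>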
<br />
'''Last update: December 2017 - UPDATE&nbsp;IN&nbsp;PROGRESS http://go.egi.eu/fedcloudstatus''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | CMF Upgrade plans <br />
! style="border-bottom:1px solid black;" | Using CMD <br />
! style="border-bottom:1px solid black;" | KVM/XEN? <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Pay-per-use <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 32 GB of RAM, up to 1TB attachable block storage<br> <br />
| style="border-bottom:1px dotted silver;" | Network ready, integration planned for Queens release <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB <br />
| style="border-bottom:1px dotted silver;" | February 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-12-01<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2.1 <br />
| style="border-bottom:1px dotted silver;" | No in near future. Current version is compatible with CMD.<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Yes (CMD-ONE-1) <br />
<br />
Except occi-server, which is 2.0.0.alpha.1 <br />
<br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- <span class="right" id="dashboard_host_allocated_cpu_str">192 Cores total<br />
</span> <br />
<br />
<span class="right">- </span><span class="right" id="dashboard_host_allocated_mem_str">340 GB of RAM total<br />
</span><br> <br />
<br />
<span class="right">- </span>Two data stores of 3TB and 700GB. <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 100 GB <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Ready (network level) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-04-23<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4.14.2 <br />
| style="border-bottom:1px dotted silver;" | Migration to OpenStack Queens/Rocky with OpenID Connect authN/authZ. ETA Q1/2019. <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-04-23<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton/Ocata/Pike/Queens <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Ubuntu 18.04 and Pike during Q3 2018<br> <br />
| style="border-bottom:1px dotted silver;" | No<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 7744 Cores: <br />
<br />
&nbsp;&nbsp;&nbsp; - 32 nodes x 8 vcpus x 16GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 36 nodes x 24 vcpus x 48GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 34 nodes x 32 vcpus x 128GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 2 nodes x 80 vcpus x 1TB RAM<br> <br />
<br />
&nbsp;&nbsp;&nbsp; - 24 nodes x 32 vcpus x 32GB RAM <br />
<br />
&nbsp;&nbsp;&nbsp; - 144 nodes x 32 vcpus x 32GB RAM <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
GPU and Infiniband access (upon request)<br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Queens on Ubuntu18.04 <br />
| style="border-bottom:1px dotted silver;" | Planned with Queens release <br />
| style="border-bottom:1px dotted silver;" | KVM and LXD <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m</span> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" | Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM - 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB) <br />
| Ready <br />
| No<br />
| 2018-06-25<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI]<br>[http://cloud.recas.ba.infn.it:8774/v2.1/%(tenant_id)s Openstack Compute]<br>[http://cloud.recas.ba.infn.it:8080/v1/AUTH_%(tenant_id)s Openstack Object-Store]<br>[http://egi-cloud.recas.ba.infn.it/ Horizon dashboard with auth token] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2017-12-14<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago<br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to Queens in Q4<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI]<br>[https://egi-cloud.pd.infn.it/dashboard Horizon dashboard with West-Life/WeNMR SSO]<br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 240 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 160GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-07-02<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 744 Cores with 3968 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes <br> [https://wiki.egi.eu/wiki/Pay-for-use Price] <br />
| style="border-bottom:1px dotted silver;" | 2018-06-20<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Pike<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 480 TB storage (Cinder / CEPH) <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for October 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br />
Candidate <br />
<br />
May 2018 <br />
<br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
Queens or "R", Ubuntu18.04 <br />
<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | KVM and lxd[[]] <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://nimbus.ncg.ingrid.pt:8774/v2.1/%(tenant_id)s Openstack Compute]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | Planned for second half of 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 31.05.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | Not planned yet. <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | By 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens. Estimated upgrade Q4 2018.<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Planned for Q4 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 08.06.2018<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.4 <br />
| style="border-bottom:1px dotted silver;" | No OpenNebula upgrade planned. We seriously think about moving to OpenStack, but not for this year.<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No plans.<br> <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | CMD-ONE-1 except occi-server (occi-server version installed is 2.0.0.alpha.1 which is needed for GPU support) <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Pike or Queens, whichever is supported at the time of the upgrade. Estimated upgrade Q4 2018. <br />
| style="border-bottom:1px dotted silver;" | yes <br />
| style="border-bottom:1px dotted silver;" | KVM <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloudctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-05-31<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BITP <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Upgrade to OpenStack Queens during July-August 2018 (not possible until now due to quite intensive resource usage); a downtime will be needed <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | KVM<br> <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | Planned for July-August 2018 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | 2018-06-18-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (13-11-2017) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Yes/No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB '''Icehouse''': Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (Dec 2017)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-10-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=98158
Federated Cloud infrastructure status
2017-12-11T15:43:48Z
<p>Verlato: /* Status of the Federated Cloud */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: February 2017''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Pay-per-use <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 32 GB of RAM, up to 1TB attachable block storage<br> <br />
| style="border-bottom:1px dotted silver;" | Planned for March 2018 <br />
| style="border-bottom:1px dotted silver;" | Yes<br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB <br />
| style="border-bottom:1px dotted silver;" | February 2018 <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | 2017-12-01<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | - 288 Cores with 592GB RAM (36 Xeon servers with 8 cores, 16GB RAM and local HD 500GB). <br />
- 160 Cores with 512GB RAM (8 Xeon servers with 20 cores, 64GB RAM and local HD 500GB). - Two data stores of 3TB and 700GB. This infrastructure is used to run several core services for EGI.eu and our capacity is compromised due to that. Imminent migration to a new endpoint (OpenNebula 5); this process will be gradual and due this some resources could be not immediately allocatable. <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Yes<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack<br>Kilo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.zam.kfa-juelich.de:8787/ OCCI]<br>[https://swift.zam.kfa-juelich.de:8888/ CDMI]<br>[https://fsd-cloud.zam.kfa-juelich.de:5000/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 216 Cores with 294 GB RAM <br />
<br />
- 50 TB Storage (~20TB object + ~30TB block) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name:&nbsp;m16<br>16 Cores<br>16 GB RAM<br>Default Disk:&nbsp;20 (Root) + 20 (Ephemeral)<br>1TB attachable block storage<br>Open Ports: 22, 80, 443, 7000-7020 <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton/Ocata <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 7744 Cores:<br />
<br />
&nbsp;&nbsp;&nbsp; - 32 nodes x 8 vcpus x 16GB RAM<br />
<br />
&nbsp;&nbsp;&nbsp; - 36 nodes x 24 vcpus x 48GB RAM<br />
<br />
&nbsp;&nbsp;&nbsp; - 34 nodes x 32 vcpus x 128GB RAM<br />
<br />
&nbsp;&nbsp;&nbsp; - 2 nodes x 80 vcpus x 1TB RAM<br> <br />
<br />
&nbsp;&nbsp;&nbsp; - 24 nodes x 32 vcpus x 32GB RAM<br />
<br />
&nbsp;&nbsp;&nbsp; - 144 nodes x 32 vcpus x 32GB RAM<br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned)<br />
| style="border-bottom:1px dotted silver;" | Yes<br />
| style="border-bottom:1px dotted silver;" | 2017-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes/No<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m</span> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM <br />
<br />
- 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Yes/No<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI] (no CDMI endpoint provided at the moment, it will be back in the near future) <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Yes<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago, Matteo Segatta <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 240 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 160GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready (planned)<br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | 2017-12-11<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 168 Cores with 896 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | Yes<br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-08-17) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 160 TB storage (Cinder / CEPH) <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for February 2018 <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | By 2018 <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | 2017-12-5<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Planned for second quarter 2018 <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | 07.12.2017<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-ctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | No<br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (13-11-2017) <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Yes/No<br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB <br> '''Icehouse''': m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=98110
Federated Cloud infrastructure status
2017-12-06T17:09:40Z
<p>Verlato: /* Status of the Federated Cloud */</p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres' availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
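For an informal, ad-hoc check that an OCCI endpoint listed in the table below answers at all, a short script along these lines can be used. This is only an illustrative sketch: the two URLs are examples copied from the ''Resource Endpoints'' column, TLS verification is disabled because several sites use IGTF CAs that are not in default trust stores, and most endpoints require X.509/VOMS client credentials, so an HTTP 401/403 reply still means the service answered. The authoritative monitoring remains the argo-mon/argo-mon2 nagios instances linked above. <br />
<pre>
# Illustrative reachability probe for OCCI endpoints taken from the table below.
# Not the official monitoring; see the argo-mon nagios instances for that.
import ssl
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://carach5.ics.muni.cz:11443/",         # CESNET-MetaCloud OCCI endpoint
    "https://egi-cloud.pd.infn.it:8787/occi1.1/",  # INFN-PADOVA-STACK OCCI endpoint
]

# Reachability check only: skip certificate verification, since site host
# certificates are typically issued by IGTF CAs not present in default stores.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for url in ENDPOINTS:
    try:
        with urllib.request.urlopen(url, timeout=10, context=ctx) as resp:
            print(f"{url}: HTTP {resp.status}")
    except urllib.error.HTTPError as err:
        # 401/403 means the endpoint is up but expects X.509/VOMS credentials.
        print(f"{url}: HTTP {err.code} (endpoint reachable)")
    except (urllib.error.URLError, OSError) as err:
        print(f"{url}: unreachable ({err})")
</pre>
<br />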
'''Last update: February 2017''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 32 GB of RAM, up to 1TB attachable block storage<br> <br />
| style="border-bottom:1px dotted silver;" | Planned for March 2018<br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB <br />
| style="border-bottom:1px dotted silver;" | February 2018<br />
| style="border-bottom:1px dotted silver;" | 2017-12-01<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | - 288 Cores with 592GB RAM (36 Xeon servers with 8 cores, 16GB RAM and local HD 500GB). <br />
- 160 Cores with 512GB RAM (8 Xeon servers with 20 cores, 64GB RAM and local HD 500GB). - Two data stores of 3TB and 700GB. This infrastructure also runs several core services for EGI.eu, so the capacity available to the federation is reduced. A migration to a new endpoint (OpenNebula 5) is imminent; the process will be gradual, and during it some resources may not be immediately allocatable. <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack<br>Kilo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.zam.kfa-juelich.de:8787/ OCCI]<br>[https://swift.zam.kfa-juelich.de:8888/ CDMI]<br>[https://fsd-cloud.zam.kfa-juelich.de:5000/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 216 Cores with 294 GB RAM <br />
<br />
- 50 TB Storage (~20TB object + ~30TB block) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name:&nbsp;m16<br>16 Cores<br>16 GB RAM<br>Default Disk:&nbsp;20 (Root) + 20 (Ephemeral)<br>1TB attachable block storage<br>Open Ports: 22, 80, 443, 7000-7020 <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 2368 Cores (32 nodes x 8 vcpus x 16GB RAM - 36 nodes x 24 vcpus x 48GB RAM - 34 nodes x 32 vcpus x 128GB RAM - 2 nodes x 80 vcpus x 1TB RAM)<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp; <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m</span> <br />
| style="border-bottom:1px dotted silver;" | Ready<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM <br />
<br />
- 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI] (no CDMI endpoint provided at the moment, it will be back in the near future) <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand. <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago, Matteo Segatta <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 240 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 160GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
| style="border-bottom:1px dotted silver;" | Not ready<br />
| style="border-bottom:1px dotted silver;" | 2017-12-06<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 168 Cores with 896 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | 2017-12-04<br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-08-17) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM - 160 TB storage (Cinder / CEPH)<br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
| style="border-bottom:1px dotted silver;" | Planned for February 2018<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | By 2018<br />
| style="border-bottom:1px dotted silver;" | 2017-12-5<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD <br />
| style="border-bottom:1px dotted silver;" | Ready<br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile <br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-ctrl.nipne.ro:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB) <br />
| style="border-bottom:1px dotted silver;" | Ready <br />
| style="border-bottom:1px dotted silver;" | 2017-11-30<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | UA <br />
| style="border-bottom:1px dotted silver;" | Volodymyr Yurchenko <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | UA-BITP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-main.bitp.kiev.ua:8787 OCCI] <br> [https://cloud-main.bitp.kiev.ua:5001/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | Total 56 virtual cores (28 physical CPUs), 62 GB RAM, 2.9 TB storage, <br />
all available for fedcloud.egi.eu through an SLA <br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Max size of VM: 8 cores, 16 GB RAM, 160 GB storage <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | INAF-TRIESTE-STACK <br />
| style="border-bottom:1px dotted silver;" | Candidate (13-11-2017) <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
== Suspended or closed sites ==<br />
<br />
'''Last update: November 2017''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage) <br />
! style="border-bottom:1px solid black;" | IPv6 readiness of the network at the site <br />
! style="border-bottom:1px solid black;" | Last update timestamp YYY-MM-DD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB <br> '''Icehouse''': m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09) <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=96477
Federated Cloud infrastructure status
2017-08-03T12:27:44Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres' availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: February 2017''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
<br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB<br />
'''Icehouse''': Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | - 288 Cores with 592GB RAM (36 Xeon servers with 8 cores, 16GB RAM and local HD 500GB). <br />
- 160 Cores with 512GB RAM (8 Xeon servers with 20 cores, 64GB RAM and local HD 500GB). - Two data stores of 3TB and 700GB. This infrastructure also runs several core services for EGI.eu, which reduces the capacity available to the FedCloud. A migration to a new endpoint (OpenNebula 5) is imminent; the process will be gradual, so some resources may not be immediately allocatable. <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack<br>Kilo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.zam.kfa-juelich.de:8787/ OCCI]<br>[https://swift.zam.kfa-juelich.de:8888/ CDMI]<br>[https://fsd-cloud.zam.kfa-juelich.de:5000/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 216 Cores with 294 GB RAM <br />
<br />
- 50 TB Storage (~20TB object + ~30TB block) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name: m16<br>16 Cores<br>16 GB RAM<br>Default Disk: 20 GB (Root) + 20 GB (Ephemeral)<br>1 TB attachable block storage<br>Open Ports: 22, 80, 443, 7000-7020 <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 2368 Cores (32 nodes x 8 vcpus x 16GB RAM - 36 nodes x 24 vcpus x 48GB RAM - 34 nodes x 32 vcpus x 128GB RAM - 2 nodes x 80 vcpus x 1TB RAM)<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br><br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp;<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2GPU Tesla K20m</span><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM <br />
<br />
- 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" |<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" |<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI] (no CDMI endpoint provided at the moment, it will be back in the near future) <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand.<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-Padova <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago, Matteo Segatta <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 240 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 160GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 168 Cores with 896 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2016-10-20) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09)<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" |<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel<br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger<br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM <br />
- 5 TB storage (cinder)<br />
- 40 TB storage (irods)<br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH<br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile<br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet<br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-ctrl.nipne.ro:8787/occi1.1 OCCI]<br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB)<br />
|}<br />
<br />
== Integrating resource providers ==<br />
<br />
Last update: May 2015 <br />
<br />
Sites that have a valid GOCDB entry should also have at least one service type listed and monitored via cloudmon.egi.eu. <br />
<br />
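For illustration, the service endpoints registered in GOCDB for a given site can be listed programmatically. The sketch below assumes Python with the requests library and that the public GOCDB programmatic interface exposes a get_service_endpoint method at the URL shown (the URL, method name and XML element names are assumptions to be verified against the GOCDB documentation); CESNET-MetaCloud is used purely as an example site name. <br />
<br />
 # Illustrative sketch only: list service endpoints registered in GOCDB for one site. <br />
 # Assumptions: public GOCDB PI URL, method name and XML layout as below. <br />
 import xml.etree.ElementTree as ET <br />
 import requests <br />
 <br />
 url = "https://goc.egi.eu/gocdbpi/public/" <br />
 params = {"method": "get_service_endpoint", "sitename": "CESNET-MetaCloud"} <br />
 xml_text = requests.get(url, params=params).text <br />
 for ep in ET.fromstring(xml_text).findall("SERVICE_ENDPOINT"): <br />
     print(ep.findtext("SERVICE_TYPE"), ep.findtext("HOSTNAME")) <br />
<br />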
{| class="wikitable sortable" style="border:1px solid black; text-align:left;" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main Contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Certification <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | KISTI <br />
| style="border-bottom:1px dotted silver;" | KR <br />
| style="border-bottom:1px dotted silver;" | Soonwook Hwang <br />
| style="border-bottom:1px dotted silver;" | Sangwan Kim, Taesang Huh, Jae-Hyuck Kwak <br />
| style="border-bottom:1px dotted silver;" | KR-KISTI-CLOUD <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fccont.kisti.re.kr:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 64 cores with 256 GB RAM and 6TB HDD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSC <br />
| style="border-bottom:1px dotted silver;" | FI <br />
| style="border-bottom:1px dotted silver;" | Jura Tarus <br />
| style="border-bottom:1px dotted silver;" | Luís Alves, Ulf Tigerstedt, Kalle Happonen <br />
| style="border-bottom:1px dotted silver;" | CSC-Cloud <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Status: Testing resource integration<br />
|}<br />
<br />
== Interested resource providers ==<br />
<br />
{| class="wikitable sortable" style="border:1px solid black; text-align:left;" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Representative <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Integration plans <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC-EBD <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Jesús Marco <br />
| style="border-bottom:1px dotted silver;" | Fernando Aguilar, Juan Carlos Sexto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | to integrate before 30th June <br />
| style="border-bottom:1px dotted silver;" | ~1000 cores (500 cores initially available in FedCloud), 1PB of storage (around 50% devoted to support FedCloud)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IICT-BAS <br />
| style="border-bottom:1px dotted silver;" | BG <br />
| style="border-bottom:1px dotted silver;" | Emanouil Atanassov <br />
| style="border-bottom:1px dotted silver;" | Todor Gurov <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Napoli <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Silvio Pardi <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - CNAF <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Cristina Aiftimiei <br />
| style="border-bottom:1px dotted silver;" | Davide Salomoni, Diego Michelotto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Torino <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Andrea Guarise <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubio Montero <br />
| style="border-bottom:1px dotted silver;" | Rafael Mayo García, Manuel Aurelio Rodríguez Pascual <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SURFsara <br />
| style="border-bottom:1px dotted silver;" | NL <br />
| style="border-bottom:1px dotted silver;" | Ron Trompert <br />
| style="border-bottom:1px dotted silver;" | Maurice Bouwhuis, Machiel Jansen <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | ISRGrid/IUCC <br />
| style="border-bottom:1px dotted silver;" | IL <br />
| style="border-bottom:1px dotted silver;" | Yossi Baruch <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | DESY <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Patrick Furhmann <br />
| style="border-bottom:1px dotted silver;" | Paul Millar <br />
| style="border-bottom:1px dotted silver;" | dCache <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Ian Collier <br />
| style="border-bottom:1px dotted silver;" | Frazer Barnsley, Alan Kyffin <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL Harwell Science <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Jens Jensen <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Castor <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Cloud storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFAE / PIC <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Victor Mendez <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SAGrid <br />
| style="border-bottom:1px dotted silver;" | ZA <br />
| style="border-bottom:1px dotted silver;" | Bruce Becker <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SRCE <br />
| style="border-bottom:1px dotted silver;" | HR <br />
| style="border-bottom:1px dotted silver;" | Emir Imamagic <br />
| style="border-bottom:1px dotted silver;" | Luko Gjenero <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | Planned - April 2014 <br />
| style="border-bottom:1px dotted silver;" | Status: Deploying OpenStack cluster, investigating storage options<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GridPP <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Adam Huffman <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Hosted at Imperial College<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CRNS/IN2P3-LAL <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Michel Jouvin <br />
| style="border-bottom:1px dotted silver;" | Mohammed Araj <br />
| style="border-bottom:1px dotted silver;" | StratusLab <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=96476
Federated Cloud infrastructure status
2017-08-03T12:26:55Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://argo-mon.egi.eu/nagios/ argo-mon.egi.eu nagios instance] and [https://argo-mon2.egi.eu/nagios/ argo-mon2.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: February 2017''' <br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable sortable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
100 Percent IT Ltd <br />
<br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka&nbsp; <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
UNIZAR / BIFI <br />
<br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-08-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack (Newton / Icehouse) <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-ciencias.bifi.unizar.es:8787 OCCI] (Newton)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
<br />
'''Newton''': m1.xlarge, VCPUs 8, Root Disk 20 GB, Ephemeral Disk 0 GB, Total Disk 20 GB, RAM 16,384 MB<br />
'''Icehouse''': Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | - 288 Cores with 592GB RAM (36 Xeon servers with 8 cores, 16GB RAM and local HD 500GB). <br />
- 160 Cores with 512GB RAM (8 Xeon servers with 20 cores, 64GB RAM and local HD 500GB). - Two data stores of 3TB and 700GB. This infrastructure is used to run several core services for EGI.eu and our capacity is compromised due to that. Imminent migration to a new endpoint (OpenNebula 5); this process will be gradual and due this some resources could be not immediately allocatable. <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack<br>Kilo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.zam.kfa-juelich.de:8787/ OCCI]<br>[https://swift.zam.kfa-juelich.de:8888/ CDMI]<br>[https://fsd-cloud.zam.kfa-juelich.de:5000/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 216 Cores with 294 GB RAM <br />
<br />
- 50 TB Storage (~20TB object + ~30TB block) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name:&nbsp;m16<br>16 Cores<br>16 GB RAM<br>Default Disk:&nbsp;20 (Root) + 20 (Ephemeral)<br>1TB attachable block storage<br>Open Ports: 22, 80, 443, 7000-7020 <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI] <br />
<br />
[https://cloud.ifca.es:8774/ OpenStack] <br />
<br />
[https://cephrgw.ifca.es:8080/ Swift] <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 2368 Cores ( 32 nodes x 8 vcpus x 16GB RAM -&nbsp; 36 nodes x 24 vcpus x 48GM RAM - 34 nodes x 32 vcpus x 128GB RAM - 2 nodes x 80 vcpus x 1TB RAM )<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request) <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br />
GRNET<br><br />
<br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | &nbsp; Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 2 ΤΒ storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp;<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 168 Cores with 1.5 GB RAM per core <br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 48 Cores with 4GB RAM per core, 6 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2GPU Tesla K20m</span><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardac, Giuseppe Platania <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-04-27) https://ggus.eu/index.php?mode=ticket_info&amp;ticket_id=126716 <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM <br />
<br />
- 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Closed (2017-05-02) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" |<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" |<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI] (no CDMI endpoint provided at the moment, it will be back in the near future) <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand.<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago, Matteo Segatta <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Newton <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 240 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 160GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.2 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 168 Cores with 896 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 16GB Memory, 40GB disk<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2016-10-20) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5.2<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 CPUs, 8GB Memory, 200GB disk<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | <br />
Alfonso Pardo Diaz <br />
<br />
Abel Francisco Paz<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Cloud resources dismissed (2017-06-09)<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" |<br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel<br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger<br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 480 Cores with 2336 GB RAM <br />
- 5 TB storage (cinder)<br />
- 40 TB storage (irods)<br />
<br />
| style="border-bottom:1px dotted silver;" | m1.12xlarge-hugemem (CPU: 48, RAM: 512 GB, disk: 160 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | Carlos de Alfonso <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 32 Cores with 4GB RAM per core and 4 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFIN-HH<br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | Ionut Vasile<br />
| style="border-bottom:1px dotted silver;" | Dragos Ciobanu-Zabet<br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-03-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br />
| style="border-bottom:1px dotted silver;" | Yes<br> <br />
| style="border-bottom:1px dotted silver;" | [http://cloud-ctrl.nipne.ro:8787/occi1.1 OCCI]<br />
| style="border-bottom:1px dotted silver;" | 96 Cores with 384 GB RAM <br>- 2TB Storage<br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.large(VCPUs 4, 8GB RAM, Root Disk 80GB)<br> m1.medium(VCPUs 2, 4GB RAM, Root Disk 40GB)<br> m1.nano(VCPUs 1, 64MB RAM)<br> m1.small(VCPUs 1, 2GB RAM, Root Disk 20GB)<br> m1.tiny(VCPUs 1, 512MB RAM, Root Disk 1GB)<br> m1.xlarge(VCPUs 8, 16GB RAM, Root Disk 160GB)<br />
|}<br />
<br />
== Integrating resource providers ==<br />
<br />
Last update: May 2015 <br />
<br />
Sites that have a valid GOCDB entry should also have at least one service type listed and monitored via cloudmon.egi.eu. <br />
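Below is a minimal sketch of how a site's registered service types could be checked programmatically, assuming the public GOCDB PI endpoint and its ''get_service_endpoint'' method; the site name used and the XML element names are illustrative assumptions about the PI output. <br />
<pre>
import urllib.request
import xml.etree.ElementTree as ET

# Public GOCDB programmatic interface (public data, no authentication assumed).
GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/"

def service_types(sitename):
    """Return the service types registered in GOCDB for a given site."""
    url = GOCDB_PI + "?method=get_service_endpoint&sitename=" + sitename
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    # SERVICE_ENDPOINT / SERVICE_TYPE element names assumed from the PI XML output.
    return sorted({ep.findtext("SERVICE_TYPE")
                   for ep in tree.getroot().iter("SERVICE_ENDPOINT")})

# Example with one of the sites listed above (purely illustrative).
print(service_types("IN2P3-IRES"))
</pre>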
<br />
{| class="wikitable sortable" style="border:1px solid black; text-align:left;" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main Contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Certification <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | KISTI <br />
| style="border-bottom:1px dotted silver;" | KR <br />
| style="border-bottom:1px dotted silver;" | Soonwook Hwang <br />
| style="border-bottom:1px dotted silver;" | Sangwan Kim, Taesang Huh, Jae-Hyuck Kwak <br />
| style="border-bottom:1px dotted silver;" | KR-KISTI-CLOUD <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fccont.kisti.re.kr:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 64 cores with 256 GB RAM and 6TB HDD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSC <br />
| style="border-bottom:1px dotted silver;" | FI <br />
| style="border-bottom:1px dotted silver;" | Jura Tarus <br />
| style="border-bottom:1px dotted silver;" | Luís Alves, Ulf Tigerstedt, Kalle Happonen <br />
| style="border-bottom:1px dotted silver;" | CSC-Cloud <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Status: Testing resource integration<br />
|}<br />
<br />
== Interested resource providers ==<br />
<br />
{| class="wikitable sortable" style="border:1px solid black; text-align:left;" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Representative <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Integration plans <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC-EBD <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Jesús Marco <br />
| style="border-bottom:1px dotted silver;" | Fernando Aguilar, Juan Carlos Sexto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | to integrate before 30th June <br />
| style="border-bottom:1px dotted silver;" | ~1000 cores (500 cores initially available in FedCloud), 1PB of storage (around 50% devoted to support FedCloud)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IICT-BAS <br />
| style="border-bottom:1px dotted silver;" | BG <br />
| style="border-bottom:1px dotted silver;" | Emanouil Atanassov <br />
| style="border-bottom:1px dotted silver;" | Todor Gurov <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Napoli <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Silvio Pardi <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - CNAF <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Cristina Aiftimiei <br />
| style="border-bottom:1px dotted silver;" | Davide Salomoni, Diego Michelotto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Torino <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Andrea Guarise <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubio Montero <br />
| style="border-bottom:1px dotted silver;" | Rafael Mayo García, Manuel Aurelio Rodríguez Pascual <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SURFsara <br />
| style="border-bottom:1px dotted silver;" | NL <br />
| style="border-bottom:1px dotted silver;" | Ron Trompert <br />
| style="border-bottom:1px dotted silver;" | Maurice Bouwhuis, Machiel Jansen <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | ISRGrid/IUCC <br />
| style="border-bottom:1px dotted silver;" | IL <br />
| style="border-bottom:1px dotted silver;" | Yossi Baruch <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | DESY <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Patrick Furhmann <br />
| style="border-bottom:1px dotted silver;" | Paul Millar <br />
| style="border-bottom:1px dotted silver;" | dCache <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Ian Collier <br />
| style="border-bottom:1px dotted silver;" | Frazer Barnsley, Alan Kyffin <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL Harwell Science <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Jens Jensen <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Castor <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Cloud storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFAE / PIC <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Victor Mendez <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SAGrid <br />
| style="border-bottom:1px dotted silver;" | ZA <br />
| style="border-bottom:1px dotted silver;" | Bruce Becker <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SRCE <br />
| style="border-bottom:1px dotted silver;" | HR <br />
| style="border-bottom:1px dotted silver;" | Emir Imamagic <br />
| style="border-bottom:1px dotted silver;" | Luko Gjenero <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | Planned - April 2014 <br />
| style="border-bottom:1px dotted silver;" | Status: Deploying OpenStack cluster, investigating storage options<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GridPP <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Adam Huffman <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Hosted at Imperial College<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CRNS/IN2P3-LAL <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Michel Jouvin <br />
| style="border-bottom:1px dotted silver;" | Mohammed Araj <br />
| style="border-bottom:1px dotted silver;" | StratusLab <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_siteconf&diff=96263
Federated Cloud siteconf
2017-07-25T17:10:16Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The main purpose of this page is to collect site-specific configuration parameters of the Federated Cloud sites, allowing comparison among them, identification of differences, and retrieval of the parameters for a specific site. <br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
Parameters provided by each site are: <br />
<br />
*'''default network name''': the name of the network assigned by default when firing up a VM at the site; at the moment, the network may be private, public, or not assigned at all; example: ''/network/PRIVATE'' <br />
*'''default network type''': can be ''public'', ''private'', or ''N/A'' (not available) <br />
*'''public network name''': name of the public network to be used; usually this differs from the default network, which is private in most cases; example: ''/network/PUBLIC'' <br />
*'''is outgoing connectivity guaranteed by default at start time''': please say YES if a newly started VM has outgoing connectivity straight away, either through a public IP or, if a public IP is not assigned at instantiation time, through NAT (a private IP enabled for outgoing connections) <br />
*'''port default firewall policy''': default policy applied at infrastructure level (firewall); usually it is either "all open" or "all closed" <br />
*'''ports firewall configuration''': port configuration on top of the default firewall policy; here you can specify e.g. which ports are open on the firewall if the default configuration is "all closed"; example: ''22, ICMP open'' <br />
*'''ports default CMF policy''': on OpenStack, it is possible to open/close ports using the OpenStack user interface; this "security groups" feature is an additional firewall layer, independent of the infrastructure (low-level) firewall, and can be configured by the user (through the Horizon interface), by API (see the sketch after this list), or by asking for support through the EGI Helpdesk. Example: "all open" or "all closed". <br />
*'''ports policy on CMF''': if the ports default CMF policy is "all closed", you may want to specify here whether there are exceptions. Example: ssh. <br />
*'''mandatory closed ports''': ports that cannot be opened due to local rules, national regulations or infrastructure constraints. Example: 25 is usually not available for security reasons (use 587 instead). <br />
*'''port configuration requests method''': how the site accepts port reconfiguration requests. Examples: GGUS, Horizon, other ways. <br />
*'''users requests''': please mention here any special requests received from users in the past that you have worked on in order to make a specific use case run on your site. <br />
*'''comments''': any comments that could help us improve this page.<br />
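For OpenStack sites, the following minimal sketch shows how a security group rule could be added through the API rather than through Horizon; it uses the openstacksdk library, and the cloud name in ''clouds.yaml'' as well as the "default" security group name are assumptions for illustration. <br />
<pre>
import openstack

# Connect using an entry from clouds.yaml (the cloud name is an assumption).
conn = openstack.connect(cloud="egi-fedcloud-site")

# Find the project's default security group and allow inbound SSH (TCP 22).
sg = conn.network.find_security_group("default")
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    protocol="tcp",
    port_range_min=22,
    port_range_max=22,
    remote_ip_prefix="0.0.0.0/0",  # restrict to known source IPs where possible
)
</pre>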
<br />
= Site-specific configuration =<br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | <br> <br />
! style="border-bottom:1px solid black;" | default network name <br />
! style="border-bottom:1px solid black;" | default network type <br />
! style="border-bottom:1px solid black;" | public network name <br />
! style="border-bottom:1px solid black;" | is outgoing connectivity guaranteed by default at start time? <br />
! style="border-bottom:1px solid black;" | port default firewall policy <br />
! style="border-bottom:1px solid black;" | ports firewall configuration <br />
! style="border-bottom:1px solid black;" | ports default CMF policy <br />
! style="border-bottom:1px solid black;" | ports policy on CMF <br />
! style="border-bottom:1px solid black;" | mandatory closed ports <br />
! style="border-bottom:1px solid black;" | port configuration requests method <br />
! style="border-bottom:1px solid black;" | users requests <br />
! style="border-bottom:1px solid black;" | comments<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | none <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | YES<br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS ticket <br />
| style="border-bottom:1px dotted silver;" | 80, 8080, 443 <br />
| style="border-bottom:1px dotted silver;" | some users have requested to limit access to their VMs to a given list of source IPs<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22,ICMP open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS,email <br />
| style="border-bottom:1px dotted silver;" | 8080, 8081 8888, 9443, 61616 (Training VO) to be opened <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | Static DHCP server (IP assigned if network contextualization fails)<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | https://carach5.ics.muni.cz:11443/network/24<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | default network name<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | 67/udp, 137/udp<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | One request to provide a private network.<br> <br />
| style="border-bottom:1px dotted silver;" | As soon as security groups are implemented in OCCI, we will switch to a more restrictive mode where only TCP 22 is open by default. Users will have a self-service control over this via OCCI.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/500ed7e7-162e-4d97-916e-bc7bc3ab9b41<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | As we well know by using occi we can create, destroy VMs, attach link networks.<br>Would it not be possible to access (ssh) VMs with private ip through occi?<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CYFRONET-CLOUD <br />
| style="border-bottom:1px dotted silver;" | fedcloud.egi.eu-internal-net<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | /network/PRIVATE<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22, 80, 443, 7000-7020 <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | all closed, except for 22, 80, 443, 7000-7020<br> <br />
| style="border-bottom:1px dotted silver;" | 25<br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | 3306, redirected to 7000; 25 (from the inside), redirected to 587.<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 7000-7020 have been defined by our network security team. We have so far redirected any requests for other ports to this range. There was a debate once when users insisted on port 3306 for MySQL, however we convinced them that their client was flawed by not supporting other ports. In the same way, users expected to be able to send email via port 25, we convinced them that port 587 is intended for that purpose.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22,ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored, unusual activities (e.g. very high volumes/frequency connections) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | port 8899 by enmr.eu<br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored, unusual activities (e.g. very high volumes/frequency connections) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22/80/443/8080 and ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users have the ability to use additional security group to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 21, 25<br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack for 80/443/8080, GGUS otherwise<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | user are not allowed to create / modify / delete security groups (in particular in a catch-all VO). Comment from the ticket: There is no name for the default network. In deed, with OpenStack and OOI, private networks does not have default name (like the public one). Each private network has its own ID (it is different for each project / VO.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/<UUID of the internal project network><br> <br />
| style="border-bottom:1px dotted silver;" | private <br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br><br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | al closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | upon request: 8899 (from a given IM/EC3 server), 80 to be negotiated<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | several ports because fedcloud users are currently running different services: web portals and applications (80/8080,443), onedata (9443), hadoop, elasticsearch, etc.<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22/443 open by default <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | &lt;PROJECTNAME&gt;_private_net<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public_net<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users have the ability to use additional security group to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_siteconf&diff=96262
Federated Cloud siteconf
2017-07-25T16:11:28Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The main purpose of this page is to collect site-specific configuration parameters of the Federated Cloud sites, allowing comparison among them, identify differences, get parameters for a specific site. <br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
Parameters provided by each site are: <br />
<br />
*'''default network name''', as the name of the network assigned by default when firing up a VM to the site; at the moment, it might be that the network is private, public or not assigned at all; example: ''/network/PRIVATE'' <br />
*'''default network type''', can be ''public'', ''private'', or ''N/A'' (not available) <br />
*'''public network name''': name of the public network to be used; usually this is different from the default network, which is private in most of the cases; example: ''/network/PUBLIC'' <br />
*'''is outgoing connectivity guaranteed by default at start time: '''please say YES if newly started VM provide directly outgoing connection, either through public IP or, if public IP is not assigned at instantiation time, through NAT (private IP enabled to outgoing connection) <br />
*'''port default firewall policy''': default policy available at infrastructure level (firewall); usually it's either "all open" or "all closed" <br />
*'''ports firewall configuration''': port configuration on top of the default firewall policy; so you can specify i.e. which ports are open on the firewall if the default configuration is "all closed"; example: ''22, ICMP open'' <br />
*'''ports default CMF policy''': on OpenStack, it is possible to open/close ports using the OpenStack user interface; these "security groups" feature is an additional firewall feature, independent from the infrastructure (low level) firewall, and can be configured by the user (using the Horizon interface) or by API, or asking for support through the EGI Helpdesk. Example: "all open" or "all closed". <br />
*'''ports policy on CMF''': if ports default CMF policy is "all closed", you may want to specify here if there are exceptions. Example: ssh. <br />
*'''mandatory closed ports''': if there are ports that cannot be opened due to local rules or national regulations or infrastructure constraints. Example: 25 is usually not available for security reasons (used 587 instead). <br />
*'''port configuration requests method''': how the site allows to fulfill port reconfiguration requests. Examples: GGUS, Horizon, other ways. <br />
*'''users requests''': please mention here any special requests come from users in the past and that you have worked in order to make a specific use case run on your site. <br />
*'''comments''': if you have any comments to report here that could help us in improving this page.<br />
<br />
= Site-specific configuration =<br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | <br> <br />
! style="border-bottom:1px solid black;" | default network name <br />
! style="border-bottom:1px solid black;" | default network type <br />
! style="border-bottom:1px solid black;" | public network name <br />
! style="border-bottom:1px solid black;" | is outgoing connectivity guaranteed by default at start time? <br />
! style="border-bottom:1px solid black;" | port default firewall policy <br />
! style="border-bottom:1px solid black;" | ports firewall configuration <br />
! style="border-bottom:1px solid black;" | ports default CMF policy <br />
! style="border-bottom:1px solid black;" | ports policy on CMF <br />
! style="border-bottom:1px solid black;" | mandatory closed ports <br />
! style="border-bottom:1px solid black;" | port configuration requests method <br />
! style="border-bottom:1px solid black;" | users requests <br />
! style="border-bottom:1px solid black;" | comments<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | none <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | YES<br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS ticket <br />
| style="border-bottom:1px dotted silver;" | 80, 8080, 443 <br />
| style="border-bottom:1px dotted silver;" | some users have requested to limit access to their VMs to a given list of source IPs<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22,ICMP open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS,email <br />
| style="border-bottom:1px dotted silver;" | 8080, 8081 8888, 9443, 61616 (Training VO) to be opened <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | Static DHCP server (IP assigned if network contextualization fails)<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | https://carach5.ics.muni.cz:11443/network/24<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | default network name<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | 67/udp, 137/udp<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | One request to provide a private network.<br> <br />
| style="border-bottom:1px dotted silver;" | As soon as security groups are implemented in OCCI, we will switch to a more restrictive mode where only TCP 22 is open by default. Users will have a self-service control over this via OCCI.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/500ed7e7-162e-4d97-916e-bc7bc3ab9b41<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | As we well know by using occi we can create, destroy VMs, attach link networks.<br>Would it not be possible to access (ssh) VMs with private ip through occi?<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CYFRONET-CLOUD <br />
| style="border-bottom:1px dotted silver;" | fedcloud.egi.eu-internal-net<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | /network/PRIVATE<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22, 80, 443, 7000-7020 <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | all closed, except for 22, 80, 443, 7000-7020<br> <br />
| style="border-bottom:1px dotted silver;" | 25<br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | 3306, redirected to 7000; 25 (from the inside), redirected to 587.<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 7000-7020 have been defined by our network security team. We have so far redirected any requests for other ports to this range. There was a debate once when users insisted on port 3306 for MySQL, however we convinced them that their client was flawed by not supporting other ports. In the same way, users expected to be able to send email via port 25, we convinced them that port 587 is intended for that purpose.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22,ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored, unusual activities (e.g. very high volumes/frequency connections) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | port 8899 by enmr.eu<br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored, unusual activities (e.g. very high volumes/frequency connections) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22/80/443/8080 and ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users have the ability to use additional security group to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 21, 25<br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack for 80/443/8080, GGUS otherwise<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | user are not allowed to create / modify / delete security groups (in particular in a catch-all VO). Comment from the ticket: There is no name for the default network. In deed, with OpenStack and OOI, private networks does not have default name (like the public one). Each private network has its own ID (it is different for each project / VO.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | private <br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br><br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | al closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | upon request: 8899 (from a given IM/EC3 server), 80 to be negotiated<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | several ports because fedcloud users are currently running different services: web portals and applications (80/8080,443), onedata (9443), hadoop, elasticsearch, etc.<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22/443 open by default <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | &lt;PROJECTNAME&gt;_private_net<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public_net<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users have the ability to use additional security group to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_siteconf&diff=96261
Federated Cloud siteconf
2017-07-25T16:01:58Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The main purpose of this page is to collect site-specific configuration parameters of the Federated Cloud sites, allowing comparison among them, identifying differences, and retrieving the parameters of a specific site. <br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
Parameters provided by each site are: <br />
<br />
*'''default network name''': the name of the network assigned by default when starting a VM at the site; at the moment, the network may be private, public or not assigned at all; example: ''/network/PRIVATE'' <br />
*'''default network type''', can be ''public'', ''private'', or ''N/A'' (not available) <br />
*'''public network name''': name of the public network to be used; usually this is different from the default network, which is private in most of the cases; example: ''/network/PUBLIC'' <br />
*'''is outgoing connectivity guaranteed by default at start time: '''please say YES if newly started VMs directly provide an outgoing connection, either through a public IP or, if a public IP is not assigned at instantiation time, through NAT (private IP enabled for outgoing connections) <br />
*'''port default firewall policy''': default policy available at infrastructure level (firewall); usually it's either "all open" or "all closed" <br />
*'''ports firewall configuration''': port configuration on top of the default firewall policy; here you can specify, e.g., which ports are open on the firewall if the default configuration is "all closed"; example: ''22, ICMP open'' <br />
*'''ports default CMF policy''': on OpenStack, it is possible to open/close ports using the OpenStack user interface; this "security groups" feature is an additional firewall layer, independent from the infrastructure (low-level) firewall, and can be configured by the user (via the Horizon interface or the API) or by asking for support through the EGI Helpdesk; see the example commands after this list. Example: "all open" or "all closed". <br />
*'''ports policy on CMF''': if ports default CMF policy is "all closed", you may want to specify here if there are exceptions. Example: ssh. <br />
*'''mandatory closed ports''': ports that cannot be opened due to local rules, national regulations or infrastructure constraints. Example: 25 is usually not available for security reasons (use 587 instead). <br />
*'''port configuration requests method''': how the site handles port reconfiguration requests. Examples: GGUS, Horizon, other ways. <br />
*'''users requests''': please mention here any special requests received from users in the past that you have worked on in order to make a specific use case run on your site. <br />
*'''comments''': if you have any comments to report here that could help us in improving this page.<br />
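<br />
For the '''ports default CMF policy''' and '''ports policy on CMF''' parameters, the commands below are a minimal sketch of how a user could open an additional port with OpenStack security groups on sites that allow it (the group name ''fedcloud-web'' is an arbitrary example and the syntax assumes a reasonably recent OpenStack CLI; as the table below shows, several sites do not let users manage security groups at all):<br />
<pre># create a dedicated security group and allow SSH and HTTP from anywhere<br />
openstack security group create fedcloud-web<br />
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 fedcloud-web<br />
openstack security group rule create --protocol tcp --dst-port 80 --remote-ip 0.0.0.0/0 fedcloud-web<br />
# attach the group to a running instance<br />
openstack server add security group &lt;VM_NAME_OR_ID&gt; fedcloud-web<br />
</pre><br />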
<br />
= Site-specific configuration =<br />
<br />
{| style="border:1px solid black; text-align:left;" class="wikitable" cellspacing="0" cellpadding="5"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | <br> <br />
! style="border-bottom:1px solid black;" | default network name <br />
! style="border-bottom:1px solid black;" | default network type <br />
! style="border-bottom:1px solid black;" | public network name <br />
! style="border-bottom:1px solid black;" | is outgoing connectivity guaranteed by default at start time? <br />
! style="border-bottom:1px solid black;" | port default firewall policy <br />
! style="border-bottom:1px solid black;" | ports firewall configuration <br />
! style="border-bottom:1px solid black;" | ports default CMF policy <br />
! style="border-bottom:1px solid black;" | ports policy on CMF <br />
! style="border-bottom:1px solid black;" | mandatory closed ports <br />
! style="border-bottom:1px solid black;" | port configuration requests method <br />
! style="border-bottom:1px solid black;" | users requests <br />
! style="border-bottom:1px solid black;" | comments<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | private <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | none <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | /network/1 <br />
| style="border-bottom:1px dotted silver;" | YES<br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS ticket <br />
| style="border-bottom:1px dotted silver;" | 80, 8080, 443 <br />
| style="border-bottom:1px dotted silver;" | some users have requested to limit access to their VMs to a given list of source IPs<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22,ICMP open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS,email <br />
| style="border-bottom:1px dotted silver;" | 8080, 8081 8888, 9443, 61616 (Training VO) to be opened <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://fedcloud-services.egi.cesga.es:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | NA (no OpenStack)<br> <br />
| style="border-bottom:1px dotted silver;" | none<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | Static DHCP server (IP assigned if network contextualization fails)<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | https://carach5.ics.muni.cz:11443/network/24<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | default network name<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | 67/udp, 137/udp<br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | One request to provide a private network.<br> <br />
| style="border-bottom:1px dotted silver;" | As soon as security groups are implemented in OCCI, we will switch to a more restrictive mode where only TCP 22 is open by default. Users will have a self-service control over this via OCCI.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/500ed7e7-162e-4d97-916e-bc7bc3ab9b41<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | As we well know by using occi we can create, destroy VMs, attach link networks.<br>Would it not be possible to access (ssh) VMs with private ip through occi?<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CYFRONET-CLOUD <br />
| style="border-bottom:1px dotted silver;" | fedcloud.egi.eu-internal-net<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | /network/PRIVATE<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | /network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22, 80, 443, 7000-7020 <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | all closed, except for 22, 80, 443, 7000-7020<br> <br />
| style="border-bottom:1px dotted silver;" | 25<br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | 3306, redirected to 7000; 25 (from the inside), redirected to 587.<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 7000-7020 have been defined by our network security team. We have so far redirected any requests for other ports to this range. There was a debate once when users insisted on port 3306 for MySQL, however we convinced them that their client was flawed by not supporting other ports. In the same way, users expected to be able to send email via port 25, we convinced them that port 587 is intended for that purpose.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack Horizon, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | /occi1.1/network/14bd3bc2-5f1a-4948-b94e-bc95e56122e5<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22,ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored, unusual activities (e.g. very high volumes/frequency connections) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | https://nebula2.ui.savba.sk:11443/network/1<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | public <br />
| style="border-bottom:1px dotted silver;" | https://nova3.ui.savba.sk:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22, ICMP open <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Horizon portal, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | port 8899 by enmr.eu<br> <br />
| style="border-bottom:1px dotted silver;" | network connections should be monitored, unusual activities (e.g. very high volumes/frequency connections) should raise alarms<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22/80/443/8080 and ICMP open<br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users have the ability to use additional security group to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 21, 25<br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack for 80/443/8080, GGUS otherwise<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | user are not allowed to create / modify / delete security groups (in particular in a catch-all VO). Comment from the ticket: There is no name for the default network. In deed, with OpenStack and OOI, private networks does not have default name (like the public one). Each private network has its own ID (it is different for each project / VO.<br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | private <br> <br />
| style="border-bottom:1px dotted silver;" | https://egi-cloud.pd.infn.it:8787/occi1.1/network/PUBLIC<br> <br />
| style="border-bottom:1px dotted silver;" | YES<br><br />
| style="border-bottom:1px dotted silver;" | all closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | al closed<br> <br />
| style="border-bottom:1px dotted silver;" | 22 open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | upon request: 8899 (from a given IM/EC3 server), 80 to be negotiated<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | public<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS <br />
| style="border-bottom:1px dotted silver;" | several ports because fedcloud users are currently running different services: web portals and applications (80/8080,443), onedata (9443), hadoop, elasticsearch, etc.<br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all closed <br />
| style="border-bottom:1px dotted silver;" | 22/443 open by default <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | GGUS <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | &lt;PROJECTNAME&gt;_private_net<br> <br />
| style="border-bottom:1px dotted silver;" | private<br> <br />
| style="border-bottom:1px dotted silver;" | public_net<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | all open<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Ports 22/tcp and ICMP open by default. Users have the ability to use additional security group to open other ports.<br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Horizon Dashboard, GGUS<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}</div>
Verlato
https://wiki.egi.eu/w/index.php?title=EGI-Engage:AMB-2017-06-14&diff=95302
EGI-Engage:AMB-2017-06-14
2017-06-10T14:58:20Z
<p>Verlato: /* Task SA2.5 MoBrain */</p>
<hr />
<div>{{Template:EGI-Engage menubar}} {{TOC_right}} <br />
<br />
= Activity Reports =<br />
<br />
== WP1 (NA1) (Yannick Legre) ==<br />
<br />
[[EGI-Engage:Dissemination|Dissemination reports]] <br />
<br />
=== Milestones &amp; Deliverables ===<br />
<br />
=== Task NA1.1 Administrative and Financial Management ===<br />
<br />
=== Task NA1.2 Technical Management ===<br />
<br />
=== Task NA1.3 Quality and Risk Management ===<br />
<br />
<br> <br />
<br />
== WP2 (NA2) (Sergio Andreozzi) ==<br />
<br />
[[EGI-Engage:Dissemination|Dissemination reports]] <br />
<br />
=== Milestones &amp; Deliverables ===<br />
<br />
=== Task NA2.1 Communication and Dissemination ===<br />
<br />
=== Task NA2.2 Strategy, Business Development and Exploitation ===<br />
<br />
=== Task NA2.3 SME/Industry Engagement and Big Data Value Chain ===<br />
<br />
== WP3 (JRA1) (Diego Scardaci) ==<br />
<br />
[[EGI-Engage:Dissemination|Dissemination reports]] <br />
<br />
=== Milestones &amp; Deliverables ===<br />
<br />
=== Task JRA1.1 Authentication and Authorisation Infrastructure ===<br />
<br />
=== Task JRA1.2 Service Registry and Marketplace ===<br />
<br />
=== Task JRA1.3 Accounting ===<br />
<br />
=== Task JRA1.4 Operations Tools ===<br />
<br />
=== Task JRA1.5 Resource Allocation – e-GRANT ===<br />
<br />
== WP4 (JRA2) (Matthew Viljoen) ==<br />
<br />
[[EGI-Engage:Dissemination|Dissemination reports]] <br />
<br />
=== Milestones &amp; Deliverables ===<br />
<br />
=== Task JRA2.1 Federated Open Data ===<br />
<br />
=== Task JRA2.2 Federated Cloud ===<br />
<br />
=== Task JRA2.3 e-Infrastructures Integration ===<br />
<br />
=== Task JRA2.4 Accelerated Computing ===<br />
<br />
== WP5 (SA1) (Peter Solagna) ==<br />
<br />
[[EGI-Engage:Dissemination|Dissemination reports]] <br />
<br />
=== Milestones &amp; Deliverables ===<br />
<br />
=== Task SA1.1 Operations Coordination ===<br />
<br />
=== Task SA1.2 Development of Security Operations ===<br />
<br />
=== Task SA1.3 Integration, Deployment of Grid and Cloud Platforms ===<br />
<br />
== WP6 (SA2) (Gergely Sipos) ==<br />
<br />
[[EGI-Engage:Dissemination|Dissemination reports]] <br />
<br />
=== Milestones &amp; Deliverables ===<br />
<br />
* D6.21 EISCAT Portal second version: Sent to external review on the 8th<br />
<br />
=== Task SA2.1 Training ===<br />
Delivered a webinar:<br />
* How to engage SMEs for NGIs: https://indico.egi.eu/indico/event/3373/<br />
<br />
Preparation for the following f2f events:<br />
EUDAT Summer School, EGI FedCloud theme, FORTH, Heraklion, Crete, Greece, 3-7/7/2017<br />
CODATA-RDA Research Data Science Summer School: http://indico.ictp.it/event/7974 (Contribute with a FedCloud tutorial), Trieste, 10-21/July/2017 (EGI training infrastructure provided to support the school)<br />
<br />
Preparation for the following webinar events:<br />
* Webinar about the Applications On Demand Platform (mainly for NGIs and their USTs): https://indico.egi.eu/indico/event/3378/<br />
* Since the Workload Management System is coming to end-of-life, EGI is trying to push for the adoption of DIRAC. We are evaluation the possibility to have another webinar (depends on DIRAC staffing)<br />
<br />
=== Task SA2.2 Technical User Support ===<br />
* Engagement with Nordic countries at NeIC<br />
* Support for BigDataEurope project<br />
* Presentation at MSO4SC project workshop and sending a proposal for engagement<br />
* Meeting with CESSDA director<br />
* Establishing Engagement WG and arranging first meeting (Jun 9)<br />
* Organising Engagement meeting for NILs (Jun 9)<br />
<br />
=== Task SA2.3 ELIXIR ===<br />
<br />
* EGI Federated Cloud developer from CESNET visited the META-pipe developers in Tromsö, Norway<br />
* Full-scale tests (over 400 cores) of META-pipe started in the local cloud environment of CSC <br />
* The EC project officer approved the effort re-balancing: 2 PM were moved form EBI-EMBL to CNRS and 2 PM to EGI.eu (outside the ELIXIR CC).<br />
* Phenomenal developers are starting to test ELIXIR VO in EGI federated cloud.<br />
<br />
=== Task SA2.4 BBMRI ===<br />
<br />
=== Task SA2.5 MoBrain ===<br />
<br />
Dissemination events:<br />
<br />
* ''Molecular dynamics of proteins in the cloud''. Data Management Services in the Cloud (co-located at the Ninth RDA Plenary), Barcelona, Spain, April 4, 2017<br />
<br />
* ''High-resolution, integrative modelling of biomolecular complexes from fuzzy data''. ELIXIR workshop on “Computational Approaches to the Study of Protein Interaction and Drug Design”, Padova, Italy, April 10-14, 2017<br />
<br />
* ''High-resolution, integrative modelling of biomolecular complexes from fuzzy data''. Understanding Protein Interactions: From Molecules to Organisms. Lyon, France, April 24-26, 2017<br />
<br />
* ''High-resolution, integrative modelling of biomolecular complexes from fuzzy data''. Meeting of the groupe de graphisme et modélisation moléculaire (GGMM). Rheims, France, May 9-11, 2017<br />
<br />
* ''MoBrain - A competence center to serve translational research from molecule to brain'', EGI Conference 2017, Catania, Italy, May 9-12, 2017<br />
<br />
* ''DisVis and PowerFit: Explorative and Integrative Modeling of Biomolecular Complexes harvesting EGI GPGPU resources'', EGI Conference 2017, Catania, Italy, May 9-12, 2017<br />
<br />
* ''ScipionCloud: Large scale cryo electron microscopy image processing on commercial and academic clouds'', EGI Conference 2017, Catania, Italy, May 9-12, 2017<br />
<br />
* ''GPGPU computing support on HTC'', EGI Conference 2017, Catania, Italy, May 9-12, 2017<br />
<br />
* ''HADDOCK workshop in information-driven modelling of biomolecular complexes''. KAUST, Saudi Arabia, May 24-25, 2017<br />
<br />
=== Task SA2.6 DARIAH ===<br />
<br />
=== Task SA2.7 LifeWatch ===<br />
Support to GBIF extended with more Cloud resources.<br />
Development of tools for use of satellite images continued.<br />
Tests of implementation of SDM (MaxEnt) as dockers started.<br />
<br />
=== Task SA2.8 EISCAT_3D ===<br />
<br />
=== Task SA2.9 EPOS ===<br />
<br />
=== Task SA2.10 Disaster Mitigation ===<br />
<br />
<br> <br />
<br />
<br> <br />
<br />
[[Category:EGI-Engage]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=GPGPU-CREAM&diff=94801
GPGPU-CREAM
2017-05-15T12:31:41Z
<p>Verlato: /* Next steps */</p>
<hr />
<div>{{Template:EGI-Engage menubar}} {{TOC_right}} <br />
= Goal =<br />
* To develop a solution enabling GPU support in CREAM-CE:<br />
# For the most popular LRMSes already supported by CREAM-CE<br />
# Based on [http://www.ogf.org/pipermail/glue-wg/attachments/20140902/de25c686/attachment-0001.doc GLUE 2.1 schema]<br />
<br />
= Work plan =<br />
# Identifying the relevant GPGPU-related parameters supported by the different LRMSes, and abstracting them into meaningful JDL attributes<br />
# GPGPU accounting is expected to be provided by LRMS log files, as done for CPU accounting, and then to follow the same APEL flow<br />
# Implementing the needed changes in CREAM-core and BLAH components<br />
# Writing the infoproviders according to GLUE 2.1<br />
# Testing and certification of the prototype<br />
# Releasing a CREAM-CE update with full GPGPU support<br />
<br />
= Testbed =<br />
* 3 nodes (2x Intel Xeon E5-2620v2) with 2 NVIDIA Tesla K20m GPUs per node available at CIRMMP<br />
** MoBrain applications installed: AMBER and GROMACS with CUDA 5.5<br />
** Batch system/Scheduler: Torque 4.2.10 (source compiled with NVML libs)/ Maui 3.3.1 <br />
** EMI3 CREAM-CE<br />
<br />
= Progress =<br />
* '''May 2015'''<br />
** tested local AMBER job submission with pbs_sched as scheduler (i.e. not using Maui) and various [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/3-nodes/NVIDIAGPGPUs.htm Torque/NVIDIA GPGPU support options]<br />
<br />
* '''June 2015'''<br />
**added support for the JDL attributes '''GPUNumber''' and '''GPUMode''': this required modifications to blah_common_submit_functions.sh and server.c <br />
**first implementation of the two new attributes for Torque/pbs_sched:<br />
GPUMode refers to the NVML COMPUTE mode (the gpu_mode variable in the pbsnodes output) and can have the following values for Torque/pbs_sched:<br />
- default - Shared mode available for multiple processes<br />
- exclusive_thread - Only one COMPUTE thread is allowed to run on the GPU (v260 exclusive)<br />
- prohibited - No COMPUTE contexts are allowed to run on the GPU<br />
- exclusive_process - Only one COMPUTE process is allowed to run on the GPU<br />
:this required modifications to pbs_submit.sh<br />
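** as an illustrative sketch only (not necessarily the exact request generated by the modified pbs_submit.sh), a JDL declaring GPUNumber=2 and GPUMode="exclusive_process" is expected to map onto a Torque/pbs_sched resource request such as:<br />
qsub -l nodes=1:ppn=1:gpus=2:exclusive_process test_gpu.sh<br />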
<br />
* '''July-August 2015''' <br />
** implemented the parser on CREAM core for the new JDL attributes GPUNumber and GPUMode<br />
** tested AMBER remote job submission through the glite-ce-job-submit client:<br />
$ glite-ce-job-submit -o jobid.txt -d -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch test.jdl<br />
$ cat test.jdl<br />
[<br />
executable = "test_gpu.sh";<br />
inputSandbox = { "test_gpu.sh" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out","err.err","min.out","heat.out" };<br />
GPUNumber=2;<br />
GPUMode="exclusive_process";<br />
]<br />
** However, since the possibility for the user to set the NVML Compute mode is not available on all batch systems, it was decided '''not to support the GPUMode JDL attribute''' in the future production release.<br />
* '''September 2015'''<br />
** contacted STFC/Emerald (LSF8 based GPU cluster), IFCA (SGE based GPU cluster) and INFN-CNAF (LSF9 based GPU cluster) to ask whether they could provide a testing instance for extending CREAM GPU support to the LSF and SGE LRMSes<br />
** continued troubleshooting CREAM prototype with AMBER application at CIRMMP cluster<br />
** analysed the GLUE2.1 schema as a base for writing the GPU-aware Torque info providers<br />
** An [https://indico.egi.eu/indico/getFile.py/access?contribId=5&resId=0&materialId=slides&confId=2380 update report of GPGPU support in CREAM-CE] has been presented at the OMB <br />
* '''October 2015'''<br />
** Enabled Maui scheduler at CIRMMP cluster and started the development of a CREAM prototype with GPGPU support for Torque/Maui. Problem: it seems there is no way to set the NVML Compute mode as is done in Torque/pbs_sched through the qsub -l (see above)<br />
** Troubleshooting of CREAM prototype.<br />
** Derek Ross of STFC/Emerald (LSF8 based GPU cluster) activated an account to access the Emerald GPGPU testbed, in order to start GPGPU-enabled CREAM prototyping for LSF.<br />
** A thread on the WP4.4 and VT-GPGPU mailing lists was started with the APEL team to investigate how to address GPGPU accounting for Torque and other LRMSes.<br />
* '''November 2015'''<br />
** Obtained availability for testing from ARNES (Slurm based GPU cluster), Queen Mary University of London (QMUL) (SGE based cluster with OpenCL compatible AMD GPUs) and GRIF (HTCondor based GPU cluster)<br />
** [http://indico.cern.ch/event/319753/contribution/8/attachments/1181551/1710838/GDB-04Nov15.pdf GPGPU EGI Activities] presented at WLCG Grid Deployement Board<br />
** Accounting issues further discussed at the [https://indico.egi.eu/indico/sessionDisplay.py?sessionId=47&confId=2544#20151111 EGI CF Accelerated Computing Session] in Bari<br />
** Continued testing and troubleshooting of GPU-enabled CREAM-CE prototype at CIRMMP testbed.<br />
** Implemented another [https://wiki.egi.eu/wiki/Competence_centre_MoBrain#How_to_run_the_DisVis_docker_image_on_the_enmr.eu_VO MoBrain use-case]: the dockerized [http://www.ncbi.nlm.nih.gov/pubmed/26026169 DisVis application] has been properly installed at CIRMMP testbed. MoBrain users (through enmr.eu VO) can now run DisVis exploiting the GPU cluster at CIRMMP via the GPU-enabled CREAM-CE <br />
** Started preparing the certification process: investigating the use of [http://www.grycap.upv.es/im/ IM (UPV tool)] for automatically deploying clusters on the EGI FedCloud to be used for the GPU-enabled CREAM-CE certification<br />
* '''December 2015'''<br />
** Coordination with CESNET/MU partner of MoBrain CC in order to:<br />
*** add their GPU nodes to the cloud testbed (i.e. via OCCI, not interested in extending the grid testbed)<br />
*** preparing a GPU-enabled VM image with Gromacs and Amber, and testing them in the FedCloud<br />
*** cloning the Gromacs and Amber WeNMR portals and interfacing them with the testbed above<br />
** managed to use the IM tool to deploy Torque, Slurm, SGE and HTCondor clusters in the cloud. Done for CentOS6 on servers without GPUs. Plan to do the same on GPU servers available through the IISAS cloud site.<br />
* '''January 2016'''<br />
** Supporting the MoBrain CC team at CIRMMP to port their AMBER portal to talk with GPGPU-enabled CREAM-CE<br />
** Contacted APEL for GPGPU accounting news: they are producing a short document spelling out a couple of scenarios of how we might proceed with grid and cloud<br />
* '''February 2016'''<br />
** Supporting the MoBrain CC team at CIRMMP to port their AMBER portal to talk with GPGPU-enabled CREAM-CE<br />
** Contributing to deliverables D4.6 and D6.7 and Periodic Report<br />
* '''March 2016'''<br />
** GPGPU-enabled CREAM-CE prototype implemented and tested at GRIF HTCondor based GPU/MIC cluster<br />
** added support to two new JDL attributes '''MICNumber''' and '''GPUModel''', expected to be supported by latest version of HTCondor, Slurm and LSF batch systems.<br />
* '''April 2016'''<br />
** GPGPU-enabled CREAM-CE prototype implemented and tested at ARNES Slurm based GPU cluster and QMUL SGE based GPU cluster<br />
** Task activity presented at EGI Conference 2016<br />
** GLUE2.1 draft updated with relevant Accelerator card specific attributes<br />
* '''May 2016'''<br />
** Participation to GLUE-WG meeting of 17th of May and updating the [https://cernbox.cern.ch/index.php/s/JPGIMJunHMl37Bo GLUE2.1 draft]<br />
** Plans to implement a prototype for the infosys based on this GLUE2.1 draft<br />
** Future official approval of GLUE 2.1 would occur after the specification is revised based on prototype lessons learned<br />
<br />
= Next steps =<br />
* Work for enabling support for accelerated computing in the grid environment officially stopped at the end of May 2016.<br />
* The CREAM developers team committed to produce a major CREAM-CE release for CentOS7 systems with the new accelerated computing capabilities by June 2017 (see statement [https://wiki.italiangrid.it/CREAM here]).<br />
<br />
= [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing Back to Accelerated Computing task] =</div>
Verlato
https://wiki.egi.eu/w/index.php?title=GPGPU-FedCloud&diff=94678
GPGPU-FedCloud
2017-05-05T15:44:55Z
<p>Verlato: /* How to create your own GPGPU server in cloud */</p>
<hr />
<div>{{Template:EGI-Engage menubar}} {{TOC_right}} <br />
<br />
= Objective =<br />
<br />
To provide support for accelerated computing in EGI-Engage federated cloud.<br />
<br />
<br />
= Participants =<br />
<br />
Viet Tran (IISAS) viet.tran _at_ savba.sk<br />
<br />
Jan Astalos (IISAS)<br />
<br />
Miroslav Dobrucky (IISAS)<br />
<br />
= Current status =<br />
<br />
Status of OpenNebula site&nbsp;[https://wiki.egi.eu/wiki/GPGPU-OpenNebula wiki.egi.eu/wiki/GPGPU-OpenNebula]<br />
<br />
<br />
<br />
IISAS-GPUCloud site with GPGPU has been established and integrated into EGI federated cloud <br />
<br />
HW configuration: <br />
<br />
6 computing nodes IBM dx360 M4 server with two NVIDIA Tesla K20 accelerators.<br />
Ubuntu 14.04.2 LTS with KVM/QEMU, PCI passthrough virtualization of GPU cards.<br />
<br />
SW configuration: <br />
<br />
Base OS: Ubuntu 14.04.2 LTS<br />
Hypervisor: KVM<br />
Middleware: Openstack Liberty<br />
GPU-enabled flavors: gpu1cpu6 (1 GPU + 6 CPU cores), gpu2cpu12 (2 GPU + 12 CPU cores)<br />
<br />
EGI federated cloud configuration: <br />
<br />
GOCDB: IISAS-GPUCloud, https://goc.egi.eu/portal/index.php?Page_Type=Site&amp;id=1485<br />
Monitoring https://cloudmon.egi.eu/nagios/cgi-bin/status.cgi?host=nova3.ui.savba.sk<br />
Openstack endpoint: https://keystone3.ui.savba.sk:5000/v2.0<br />
OCCI endpoint: https://nova3.ui.savba.sk:8787/occi1.1/<br />
Supported VOs: fedcloud.egi.eu, ops, dteam, moldyngrid, enmr.eu, vo.lifewatch.eu, acc-comp.egi.eu<br />
<br />
Applications being tested/running on IISAS-GPUCloud <br />
<br />
MolDynGrid http://moldyngrid.org/<br />
WeNMR https://www.wenmr.eu/<br />
Lifewatch-CC https://wiki.egi.eu/wiki/CC-LifeWatch<br />
<br />
For information and support, please contact us via cloud-admin _at_ savba.sk<br />
<br />
= How to use GPGPU on IISAS-GPUCloud =<br />
For EGI users:<br />
<br />
Join EGI federated cloud https://wiki.egi.eu/wiki/Federated_Cloud_user_support#Quick_Start<br />
<br />
Install your rOCCI client if you don't have it already (in Linux: just single command "curl -L http://go.egi.eu/fedcloud.ui | sudo /bin/bash -" )<br />
<br />
Get VOMS proxy certificate from fedcloud.egi.eu or any supported VO '''with -rfc''' (on rOCCI client: "voms-proxy-init --voms fedcloud.egi.eu -rfc")<br />
<br />
Choose a suitable flavor with GPU (e.g. gpu1cpu6, OCCI users: resource_tpl#f0cd78ab-10a0-4350-a6cb-5f3fdd6e6294)<br />
<br />
Choose a suitable image (e.g. Ubuntu-14.04-UEFI, OCCI users: os_tpl#8fc055c5-eace-4bf2-9f87-100f3026227e)<br />
<br />
Create a keypair for logging in to your server (it is stored in the tmpfedcloud.login context file); a minimal example is sketched below<br />
(see https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment#How_to_create_a_key_pair_to_access_the_VMs_via_SSH) <br />
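A minimal sketch of this step, assuming a cloud-init ''#cloud-config'' user-data file that defines the ''cloudadm'' account used later when logging in (the exact file layout is an assumption; see the linked wiki page for the authoritative format):<br />
<pre>ssh-keygen -t rsa -b 2048 -f tmpfedcloud   # produces tmpfedcloud (private key) and tmpfedcloud.pub (public key)<br />
# tmpfedcloud.login (cloud-init user-data); paste the content of tmpfedcloud.pub on the last line<br />
#cloud-config<br />
users:<br />
  - name: cloudadm<br />
    ssh-authorized-keys:<br />
      - &lt;content of tmpfedcloud.pub&gt;<br />
</pre><br />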
<br />
Create a VM with the selected image, flavor and keypair (OCCI users: copy the following very long OCCI command<br />
occi --endpoint https://nova3.ui.savba.sk:8787/occi1.1/ \<br />
--auth x509 --user-cred $X509_USER_PROXY --voms --action create --resource compute \<br />
--mixin os_tpl#8fc055c5-eace-4bf2-9f87-100f3026227e --mixin resource_tpl#f0cd78ab-10a0-4350-a6cb-5f3fdd6e6294 \<br />
--attribute occi.core.title="Testing GPU" \<br />
--context user_data="file://$PWD/tmpfedcloud.login"<br />
remark: check the proper os_tpl-ID by<br />
occi --endpoint https://nova3.ui.savba.sk:8787/occi1.1/ \<br />
--auth x509 --user-cred $X509_USER_PROXY --voms --action describe --resource os_tpl | grep -A1 Ubuntu-14 <br />
<br />
Assign a public (floating) IP to your VM (using VM_ID from previous command and /occi1.1/network/PUBLIC<br />
occi --endpoint https://nova3.ui.savba.sk:8787/occi1.1/ \<br />
--auth x509 --user-cred $X509_USER_PROXY --voms --action link \<br />
--resource https://nova3.ui.savba.sk:8787/occi1.1/compute/$YOUR_VM_ID_HERE -j /occi1.1/network/PUBLIC)<br />
<br />
Log in the VM with your private key and use it as your own GPU server (ssh -i tmpfedcloud cloudadm@$VM_PUBLIC_IP)<br />
Remark: please update the VM OS immediately: sudo apt-get update && sudo unattended-upgrade; sudo reboot<br />
<br />
Delete your VM to release resources for other users:<br />
occi --endpoint https://nova3.ui.savba.sk:8787/occi1.1/ \<br />
--auth x509 --user-cred $X509_USER_PROXY --voms --action delete \<br />
--resource https://nova3.ui.savba.sk:8787/occi1.1/compute/$YOUR_VM_ID_HERE<br />
<br />
'''Please remember to delete/terminate your servers when you finish your jobs to release resources for other users'''<br />
<br />
<br />
For access to IISAS-GPUCloud via portal:<br />
<br />
Get a token issued by Keystone with VOMS proxy certificate. You can use the tool from https://github.com/tdviet/Keystone-VOMS-client<br />
<br />
Login into Openstack Horizon dashboard with the token via https://horizon.ui.savba.sk/horizon/auth/token/<br />
<br />
Create and manage VMs using the portal.<br />
<br />
'''Note''': All network connections to/from VMs are logged and monitored by IDS.<br />
If users have long computations, please inform us in advance. VMs with long periods of inactivity will be deleted to release resources. <br />
The default user account for VMs created from Ubuntu-based images via Horizon is "ubuntu". <br />
The default user account for VMs created by rOCCI is defined in the context file "tmpfedcloud.login"<br />
<br />
= How to create your own GPGPU server in cloud =<br />
<br />
This is a short set of instructions for creating a GPGPU server in the cloud from an Ubuntu vanilla image <br />
<br />
Create a VM from a vanilla image with UEFI support (e.g. Ubuntu-14.04-UEFI; make sure to choose a flavor with GPU support)<br />
<br />
Install gcc, make and kernel-extra: "apt-get update; apt-get install gcc make linux-image-extra-virtual"<br />
<br />
Choose and download correct driver from http://www.nvidia.com/Download/index.aspx, and upload it to the VM<br />
<br />
Install the NVIDIA driver: "dpkg -i nvidia-driver-local-repo-ubuntu*_amd64.deb" (or "./NVIDIA-Linux-x86_64-*.run" )<br />
<br />
Download CUDA toolkit from https://developer.nvidia.com/cuda-downloads (choose deb format for smaller download)<br />
<br />
Install the CUDA toolkit: "dpkg -i cuda-repo-ubuntu*_amd64.deb; apt-get update; apt-get install cuda" (very large install, 650+ packages; it takes a long time, ~15 minutes)<br />
and set the environment (e.g. "export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}; export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}" )<br />
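Optionally, a small sketch for making the CUDA environment persistent across logins (standard Ubuntu practice, not part of the original instructions; adjust the CUDA version path as needed):<br />
<pre>echo 'export PATH=/usr/local/cuda-8.0/bin:$PATH' | sudo tee /etc/profile.d/cuda.sh<br />
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH' | sudo tee -a /etc/profile.d/cuda.sh<br />
</pre><br />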
<br />
Your server is ready for your application. You can install additional software (NAMD, GROMACS, ...) and your own application now<br />
<br />
<br />
For your convenience, a script is created for installing NVIDIA + CUDA automatically https://github.com/tdviet/NVIDIA_CUDA_installer<br />
<br />
Be sure to make a snapshot of your server for later use. You may need to suspend your server before creating the snapshot (due to KVM passthrough). <br />
Do not terminate your server before creating a snapshot: the whole server will be deleted when terminated<br />
<br />
Other scripts for creating GPGPU servers with NVIDIA + CUDA on the cloud via occi, cloud-init and ansible roles have been developed as a result of a collaboration with INDIGO and <br />
West-life, and are available at http://about.west-life.eu/network/west-life/documentation/egi-platforms/accelerated-computing-platforms<br />
<br />
= Verify if CUDA is correctly installed =<br />
<pre>]$ sudo apt-get install cuda-samples-8-0<br />
]$ cd /usr/local/cuda-8.0/samples/1_Utilities/deviceQuery<br />
]$ sudo make<br />
/usr/local/cuda-8.0/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=sm_20 -gencode <br />
[..]<br />
mkdir -p ../../bin/x86_64/linux/release<br />
cp deviceQuery ../../bin/x86_64/linux/release<br />
<br />
]$ ./deviceQuery<br />
./deviceQuery Starting...<br />
<br />
CUDA Device Query (Runtime API) version (CUDART static linking)<br />
<br />
Detected 1 CUDA Capable device(s)<br />
<br />
Device 0: "Tesla K20m"<br />
CUDA Driver Version / Runtime Version 8.0 / 8.0<br />
CUDA Capability Major/Minor version number: 3.5<br />
Total amount of global memory: 4743 MBytes (4972937216 bytes)<br />
(13) Multiprocessors, (192) CUDA Cores/MP: 2496 CUDA Cores<br />
GPU Max Clock rate: 706 MHz (0.71 GHz)<br />
Memory Clock rate: 2600 Mhz<br />
Memory Bus Width: 320-bit<br />
L2 Cache Size: 1310720 bytes<br />
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)<br />
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers<br />
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers<br />
Total amount of constant memory: 65536 bytes<br />
Total amount of shared memory per block: 49152 bytes<br />
Total number of registers available per block: 65536<br />
Warp size: 32<br />
Maximum number of threads per multiprocessor: 2048<br />
Maximum number of threads per block: 1024<br />
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)<br />
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)<br />
Maximum memory pitch: 2147483647 bytes<br />
Texture alignment: 512 bytes<br />
Concurrent copy and kernel execution: Yes with 2 copy engine(s)<br />
Run time limit on kernels: No<br />
Integrated GPU sharing Host Memory: No<br />
Support host page-locked memory mapping: Yes<br />
Alignment requirement for Surfaces: Yes<br />
Device has ECC support: Enabled<br />
Device supports Unified Addressing (UVA): Yes<br />
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 7<br />
Compute Mode:<br />
&lt; Default (multiple host threads can use&nbsp;::cudaSetDevice() with device simultaneously) &gt;<br />
<br />
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = Tesla K20m<br />
Result = PASS<br />
</pre><br />
<br />
= How to enable GPGPU passthrough in OpenStack =<br />
For admins of cloud providers<br />
On the computing node, get the vendor/product ID of your hardware: "lspci | grep NVIDIA" to get the PCI slot of the GPU, then "virsh nodedev-dumpxml pci_xxxx_xx_xx_x"<br />
On computing node, unbind device from host kernel driver<br />
On computing node, add "pci_passthrough_whitelist = {"vendor_id":"xxxx","product_id":"xxxx"}" to nova.conf<br />
On controller node, add "pci_alias = {"vendor_id":"xxxx","product_id":"xxxx", "name":"GPU"}" to nova.conf<br />
On controller node, enable PciPassthroughFilter in the scheduler<br />
Create new flavors with "pci_passthrough:alias" (or add key to existing flavor) e.g. nova flavor-key m1.large set "pci_passthrough:alias"="GPU:2"<br />
<br />
= Progress =<br />
<br />
* May 2015<br />
** Review of available technologies<br />
** GPGPU virtualisation in KVM/QEMU<br />
** Performance testing of passthrough<br />
<br />
HW configuration: <br />
IBM dx360 M4 server with two NVIDIA Tesla K20 accelerators.<br />
Ubuntu 14.04.2 LTS with KVM/QEMU, PCI passthrough virtualization of GPU cards.<br />
<br />
Tested application:<br />
NAMD molecular dynamics simulation (CUDA version), STMV test example (http://www.ks.uiuc.edu/Research/namd/).<br />
<br />
Performance results:<br />
The tested application runs 2-3% slower in a virtual machine compared to a direct run on the tested server.<br />
If hyperthreading is enabled on compute server, vCPUs have to be pinned to real cores so that<br />
whole cores will be dedicated to one VM. To avoid potential performance problems, hyperthreading <br />
should be switched off.<br />
<br />
* June 2015<br />
** Creating cloud site with GPGPU support<br />
Configuration: master node, 2 worker nodes (IBM dx360 M4 servers, see above)<br />
Base OS: Ubuntu 14.04.2 LTS<br />
Hypervisor: KVM<br />
Middleware: Openstack Kilo<br />
<br />
* July 2015<br />
** Creating cloud site with GPGPU support<br />
Cloud site created at keystone3.ui.savba.sk, master + two worker nodes, configuration reported above<br />
Creating VM images for GPGPU (based on Ubuntu 14.04, GPU driver and libraries)<br />
<br />
* August 2015<br />
** Testing cloud site with GPGPU support<br />
Performance testing and tuning with GPGPU in Openstack <br />
- comparing performance of cloud-based VM with non-cloud virtualization and physical machine, finding discrepancies and tuning them<br />
- setting CPU flavor in Openstack nova (performance optimization) <br />
- Adjusting Openstack scheduler<br />
<br />
Starting the process of integrating the site into the EGI FedCloud<br />
- Keystone VOMS support being integrated<br />
- OCCI in preparation, installation planned in September<br />
<br />
* September 2015<br />
Continuing the integration into the EGI FedCloud<br />
<br />
* October 2015<br />
Full integration into the EGI FedCloud, certification process ongoing<br />
Support for the moldyngrid, enmr.eu and vo.lifewatch.eu VOs<br />
<br />
* November 2015<br />
Created a new authentication module for logging into the Horizon dashboard via a Keystone token<br />
Various client tools: getting a token, installing NVIDIA + CUDA<br />
Participation in the EGI Community Forum in Bari<br />
Site certified<br />
<br />
* December 2015<br />
User support: adding and testing images from various VOs, solving problems with multiple-VO users<br />
Maintenance: security updates and minor improvements<br />
<br />
* January 2016<br />
Testing + performance tuning OpenCL<br />
Updating images with CUDA<br />
Adding OpenStack Ceilometer for better resource monitoring/accounting<br />
<br />
* February-March 2016<br />
Testing VM migration<br />
Examining GLUE schemas<br />
Examining accounting format and tools<br />
<br />
* April 2016 <br />
Status report presented at EGI Conference 2016<br />
<br />
* May 2016<br />
[https://cernbox.cern.ch/index.php/s/JPGIMJunHMl37Bo GLUE2.1 draft] discussed at GLUE-WG meeting and updated with relevant Accelerator card specific attributes.<br />
GPGPU experimental support enabled on CESNET-Metacloud site. VMs with Tesla M2090 GPU cards tested with DisVis program. <br />
Working on GPU support with the LXC/LXD hypervisor in OpenStack, which would provide better performance than KVM.<br />
<br />
* Next steps <br />
Production, application support<br />
Cooperation with APEL team on accounting of GPUs<br />
Generating II according to GLUE 2.1<br />
<br />
= [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing Back to Accelerated Computing task] =</div>
Verlato
https://wiki.egi.eu/w/index.php?title=GPGPU-FedCloud&diff=94677
GPGPU-FedCloud
2017-05-05T15:38:51Z
<p>Verlato: /* How to create your own GPGPU server in cloud */</p>
<hr />
<div>{{Template:EGI-Engage menubar}} {{TOC_right}} <br />
<br />
= Objective =<br />
<br />
To provide support for accelerated computing in EGI-Engage federated cloud.<br />
<br />
<br />
= Participants =<br />
<br />
Viet Tran (IISAS) viet.tran _at_ savba.sk<br />
<br />
Jan Astalos (IISAS)<br />
<br />
Miroslav Dobrucky (IISAS)<br />
<br />
= Current status =<br />
<br />
Status of OpenNebula site&nbsp;[https://wiki.egi.eu/wiki/GPGPU-OpenNebula wiki.egi.eu/wiki/GPGPU-OpenNebula]<br />
<br />
<br />
<br />
IISAS-GPUCloud site with GPGPU has been established and integrated into EGI federated cloud <br />
<br />
HW configuration: <br />
<br />
6 computing nodes (IBM dx360 M4 servers), each with two NVIDIA Tesla K20 accelerators.<br />
Ubuntu 14.04.2 LTS with KVM/QEMU, PCI passthrough virtualization of GPU cards.<br />
<br />
SW configuration: <br />
<br />
Base OS: Ubuntu 14.04.2 LTS<br />
Hypervisor: KVM<br />
Middleware: Openstack Liberty<br />
GPU-enabled flavors: gpu1cpu6 (1 GPU + 6 CPU cores), gpu2cpu12 (2 GPUs + 12 CPU cores)<br />
<br />
EGI federated cloud configuration: <br />
<br />
GOCDB: IISAS-GPUCloud, https://goc.egi.eu/portal/index.php?Page_Type=Site&amp;id=1485<br />
Monitoring https://cloudmon.egi.eu/nagios/cgi-bin/status.cgi?host=nova3.ui.savba.sk<br />
Openstack endpoint: https://keystone3.ui.savba.sk:5000/v2.0<br />
OCCI endpoint: https://nova3.ui.savba.sk:8787/occi1.1/<br />
Supported VOs: fedcloud.egi.eu, ops, dteam, moldyngrid, enmr.eu, vo.lifewatch.eu, acc-comp.egi.eu<br />
<br />
Applications being tested/running on IISAS-GPUCloud <br />
<br />
MolDynGrid http://moldyngrid.org/<br />
WeNMR https://www.wenmr.eu/<br />
Lifewatch-CC https://wiki.egi.eu/wiki/CC-LifeWatch<br />
<br />
For information and support, please contact us via cloud-admin _at_ savba.sk<br />
<br />
= How to use GPGPU on IISAS-GPUCloud =<br />
For EGI users:<br />
<br />
Join EGI federated cloud https://wiki.egi.eu/wiki/Federated_Cloud_user_support#Quick_Start<br />
<br />
Install the rOCCI client if you don't have it already (on Linux it is a single command: "curl -L http://go.egi.eu/fedcloud.ui | sudo /bin/bash -")<br />
<br />
Get VOMS proxy certificate from fedcloud.egi.eu or any supported VO '''with -rfc''' (on rOCCI client: "voms-proxy-init --voms fedcloud.egi.eu -rfc")<br />
<br />
Choose a suitable flavor with GPU (e.g. gpu1cpu6, OCCI users: resource_tpl#f0cd78ab-10a0-4350-a6cb-5f3fdd6e6294)<br />
<br />
Choose a suitable image (e.g. Ubuntu-14.04-UEFI, OCCI users: os_tpl#8fc055c5-eace-4bf2-9f87-100f3026227e)<br />
<br />
Create a keypair for logging in to your server (stored in the tmpfedcloud.login context file)<br />
(see https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment#How_to_create_a_key_pair_to_access_the_VMs_via_SSH) <br />
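<br />
A minimal sketch of one way to do this (the exact context-file format described on the page linked above may differ; the key file name tmpfedcloud and the login name cloudadm are only chosen here to match the ssh example further below):<br />
<pre>$ ssh-keygen -t rsa -b 2048 -f tmpfedcloud<br />
$ cat &gt; tmpfedcloud.login &lt;&lt;EOF<br />
#cloud-config<br />
users:<br />
  - name: cloudadm<br />
    sudo: ALL=(ALL) NOPASSWD:ALL<br />
    ssh-authorized-keys:<br />
      - $(cat tmpfedcloud.pub)<br />
EOF</pre> <br />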
<br />
Create a VM with the selected image, flavor and keypair. OCCI users: run the following (long) OCCI command:<br />
occi --endpoint https://nova3.ui.savba.sk:8787/occi1.1/ \<br />
--auth x509 --user-cred $X509_USER_PROXY --voms --action create --resource compute \<br />
--mixin os_tpl#8fc055c5-eace-4bf2-9f87-100f3026227e --mixin resource_tpl#f0cd78ab-10a0-4350-a6cb-5f3fdd6e6294 \<br />
--attribute occi.core.title="Testing GPU" \<br />
--context user_data="file://$PWD/tmpfedcloud.login"<br />
Remark: check the proper os_tpl ID with<br />
occi --endpoint https://nova3.ui.savba.sk:8787/occi1.1/ \<br />
--auth x509 --user-cred $X509_USER_PROXY --voms --action describe --resource os_tpl | grep -A1 Ubuntu-14 <br />
<br />
Assign a public (floating) IP to your VM (using the VM ID returned by the previous command and /occi1.1/network/PUBLIC)<br />
occi --endpoint https://nova3.ui.savba.sk:8787/occi1.1/ \<br />
--auth x509 --user-cred $X509_USER_PROXY --voms --action link \<br />
--resource https://nova3.ui.savba.sk:8787/occi1.1/compute/$YOUR_VM_ID_HERE -j /occi1.1/network/PUBLIC<br />
<br />
Log in to the VM with your private key and use it as your own GPU server (ssh -i tmpfedcloud cloudadm@$VM_PUBLIC_IP)<br />
Remark: please update the VM OS immediately: sudo apt-get update && sudo unattended-upgrade; sudo reboot<br />
<br />
Delete your VM to release resources for other users:<br />
occi --endpoint https://nova3.ui.savba.sk:8787/occi1.1/ \<br />
--auth x509 --user-cred $X509_USER_PROXY --voms --action delete \<br />
--resource https://nova3.ui.savba.sk:8787/occi1.1/compute/$YOUR_VM_ID_HERE<br />
<br />
'''Please remember to delete/terminate your servers when you finish your jobs to release resources for other users'''<br />
<br />
<br />
For access to IISAS-GPUCloud via portal:<br />
<br />
Get a token issued by Keystone with VOMS proxy certificate. You can use the tool from https://github.com/tdviet/Keystone-VOMS-client<br />
<br />
Login into Openstack Horizon dashboard with the token via https://horizon.ui.savba.sk/horizon/auth/token/<br />
<br />
Create and manage VMs using the portal.<br />
<br />
'''Note''': All network connections to/from VMs are logged and monitored by an IDS.<br />
If you plan long computations, please inform us in advance. VMs that remain inactive for a long time will be deleted to release resources.<br />
The default user account for VMs created from Ubuntu-based images via Horizon is "ubuntu". <br />
The default user account for VMs created by rOCCI is defined in the context file "tmpfedcloud.login".<br />
<br />
= How to create your own GPGPU server in cloud =<br />
<br />
This is a short guide to creating a GPGPU server in the cloud from a vanilla Ubuntu image.<br />
<br />
Create a VM from a vanilla image with UEFI support (e.g. Ubuntu-14.04-UEFI); make sure to choose a flavor with GPU support<br />
<br />
Install gcc, make and kernel-extra: "apt-get update; apt-get install gcc make linux-image-extra-virtual"<br />
<br />
Choose and download the correct driver from http://www.nvidia.com/Download/index.aspx, and upload it to the VM<br />
<br />
Install the NVIDIA driver: "dpkg -i nvidia-driver-local-repo-ubuntu*_amd64.deb" (or "./NVIDIA-Linux-x86_64-*.run" )<br />
<br />
Download CUDA toolkit from https://developer.nvidia.com/cuda-downloads (choose deb format for smaller download)<br />
<br />
Install the CUDA toolkit: "dpkg -i cuda-repo-ubuntu*_amd64.deb; apt-get update; apt-get install cuda" (a very large install, 650+ packages, which takes a long time, ~15 minutes)<br />
and set the environment (e.g. "export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}; export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}")<br />
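<br />
For convenience, the steps above can be collected into a single script, along the lines of the following sketch (run as root; the exact .deb/.run file names and the CUDA version depend on what you downloaded):<br />
<pre>#!/bin/sh<br />
# prerequisites for building the NVIDIA kernel module<br />
apt-get update<br />
apt-get install -y gcc make linux-image-extra-virtual<br />
# NVIDIA driver (example file name; use the package you downloaded)<br />
dpkg -i nvidia-driver-local-repo-ubuntu*_amd64.deb<br />
# CUDA toolkit (very large install)<br />
dpkg -i cuda-repo-ubuntu*_amd64.deb<br />
apt-get update<br />
apt-get install -y cuda<br />
# environment for the current shell<br />
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}<br />
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}<br />
# quick check that the driver sees the GPU<br />
nvidia-smi</pre> <br />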
<br />
Your server is now ready for your application. You can install additional software (NAMD, GROMACS, ...) and your own applications.<br />
<br />
<br />
For your convenience, a script that installs NVIDIA + CUDA automatically is available at https://github.com/tdviet/NVIDIA_CUDA_installer<br />
<br />
Be sure to make a snapshot of your server for later use. You may need to suspend the server before creating the snapshot (due to KVM passthrough). <br />
Do not terminate your server before creating the snapshot; the whole server is deleted when terminated.<br />
<br />
Other scripts for creating GPGPU servers in the cloud via occi and Ansible roles have been developed as a result of a collaboration with INDIGO and West-life, <br />
and are available at http://about.west-life.eu/network/west-life/documentation/egi-platforms/accelerated-computing-platforms<br />
<br />
= Verify if CUDA is correctly installed =<br />
<pre>]$ sudo apt-get install cuda-samples-8-0<br />
]$ cd /usr/local/cuda-8.0/samples/1_Utilities/deviceQuery<br />
]$ sudo make<br />
/usr/local/cuda-8.0/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_20,code=sm_20 -gencode <br />
[..]<br />
mkdir -p ../../bin/x86_64/linux/release<br />
cp deviceQuery ../../bin/x86_64/linux/release<br />
<br />
]$ ./deviceQuery<br />
./deviceQuery Starting...<br />
<br />
CUDA Device Query (Runtime API) version (CUDART static linking)<br />
<br />
Detected 1 CUDA Capable device(s)<br />
<br />
Device 0: "Tesla K20m"<br />
CUDA Driver Version / Runtime Version 8.0 / 8.0<br />
CUDA Capability Major/Minor version number: 3.5<br />
Total amount of global memory: 4743 MBytes (4972937216 bytes)<br />
(13) Multiprocessors, (192) CUDA Cores/MP: 2496 CUDA Cores<br />
GPU Max Clock rate: 706 MHz (0.71 GHz)<br />
Memory Clock rate: 2600 Mhz<br />
Memory Bus Width: 320-bit<br />
L2 Cache Size: 1310720 bytes<br />
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)<br />
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers<br />
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers<br />
Total amount of constant memory: 65536 bytes<br />
Total amount of shared memory per block: 49152 bytes<br />
Total number of registers available per block: 65536<br />
Warp size: 32<br />
Maximum number of threads per multiprocessor: 2048<br />
Maximum number of threads per block: 1024<br />
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)<br />
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)<br />
Maximum memory pitch: 2147483647 bytes<br />
Texture alignment: 512 bytes<br />
Concurrent copy and kernel execution: Yes with 2 copy engine(s)<br />
Run time limit on kernels: No<br />
Integrated GPU sharing Host Memory: No<br />
Support host page-locked memory mapping: Yes<br />
Alignment requirement for Surfaces: Yes<br />
Device has ECC support: Enabled<br />
Device supports Unified Addressing (UVA): Yes<br />
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 7<br />
Compute Mode:<br />
&lt; Default (multiple host threads can use&nbsp;::cudaSetDevice() with device simultaneously) &gt;<br />
<br />
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = Tesla K20m<br />
Result = PASS<br />
</pre><br />
<br />
= How to enable GPGPU passthrough in OpenStack =<br />
For admins of cloud providers:<br />
On the computing node, get the vendor/product ID of your hardware: "lspci | grep NVIDIA" to find the PCI slot of the GPU, then "virsh nodedev-dumpxml pci_xxxx_xx_xx_x"<br />
On the computing node, unbind the device from the host kernel driver<br />
On the computing node, add "pci_passthrough_whitelist = {"vendor_id":"xxxx","product_id":"xxxx"}" to nova.conf<br />
On the controller node, add "pci_alias = {"vendor_id":"xxxx","product_id":"xxxx", "name":"GPU"}" to nova.conf<br />
On the controller node, enable PciPassthroughFilter in the scheduler<br />
Create new flavors with "pci_passthrough:alias" (or add the key to an existing flavor), e.g. nova flavor-key m1.large set "pci_passthrough:alias"="GPU:2"<br />
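<br />
As an illustration, the corresponding configuration fragments might look like the sketch below. The vendor ID 10de is NVIDIA's; the product ID 1028 is only an example for a Tesla K20m, so use the values reported by "lspci -nn" for your card, and note that option names can vary between OpenStack releases:<br />
<pre># on the computing node: detach the GPU from the host kernel driver (example PCI address)<br />
virsh nodedev-detach pci_0000_83_00_0<br />
<br />
# nova.conf on the computing node<br />
[DEFAULT]<br />
pci_passthrough_whitelist = {"vendor_id":"10de","product_id":"1028"}<br />
<br />
# nova.conf on the controller node<br />
[DEFAULT]<br />
pci_alias = {"vendor_id":"10de","product_id":"1028","name":"GPU"}<br />
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter<br />
<br />
# expose the GPU through a flavor (2 GPUs per VM in this example)<br />
nova flavor-key gpu2cpu12 set "pci_passthrough:alias"="GPU:2"</pre> <br />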
<br />
= Progress =<br />
<br />
* May 2015<br />
** Review of available technologies<br />
** GPGPU virtualisation in KVM/QEMU<br />
** Performance testing of passthrough<br />
<br />
HW configuration: <br />
IBM dx360 M4 server with two NVIDIA Tesla K20 accelerators.<br />
Ubuntu 14.04.2 LTS with KVM/QEMU, PCI passthrough virtualization of GPU cards.<br />
<br />
Tested application:<br />
NAMD molecular dynamics simulation (CUDA version), STMV test example (http://www.ks.uiuc.edu/Research/namd/).<br />
<br />
Performance results:<br />
The tested application runs 2-3% slower in a virtual machine than directly on the host server.<br />
If hyperthreading is enabled on the compute server, vCPUs have to be pinned to real cores so that<br />
whole cores are dedicated to one VM. To avoid potential performance problems, hyperthreading<br />
should be switched off.<br />
<br />
* June 2015<br />
** Creating cloud site with GPGPU support<br />
Configuration: master node, 2 worker nodes (IBM dx360 M4 servers, see above)<br />
Base OS: Ubuntu 14.04.2 LTS<br />
Hypervisor: KVM<br />
Middleware: Openstack Kilo<br />
<br />
* July 2015<br />
** Creating cloud site with GPGPU support<br />
Cloud site created at keystone3.ui.savba.sk, master + two worker nodes, configuration reported above<br />
Creating VM images for GPGPU (based on Ubuntu 14.04, GPU driver and libraries)<br />
<br />
* August 2015<br />
** Testing cloud site with GPGPU support<br />
Performance testing and tuning with GPGPU in Openstack <br />
- comparing performance of cloud-based VM with non-cloud virtualization and physical machine, finding discrepancies and tuning them<br />
- setting CPU flavor in Openstack nova (performance optimization) <br />
- Adjusting Openstack scheduler<br />
<br />
Starting the process of integrating the site into the EGI FedCloud<br />
- Keystone VOMS support being integrated<br />
- OCCI in preparation, installation planned in September<br />
<br />
* September 2015<br />
Continuing the integration into the EGI FedCloud<br />
<br />
* October 2015<br />
Full integration into the EGI FedCloud, certification process ongoing<br />
Support for the moldyngrid, enmr.eu and vo.lifewatch.eu VOs<br />
<br />
* November 2015<br />
Created a new authentication module for logging into the Horizon dashboard via a Keystone token<br />
Various client tools: getting a token, installing NVIDIA + CUDA<br />
Participation in the EGI Community Forum in Bari<br />
Site certified<br />
<br />
* December 2015<br />
User support: adding and testing images from various VOs, solving problems with multiple-VO users<br />
Maintenance: security updates and minor improvements<br />
<br />
* January 2016<br />
Testing + performance tuning OpenCL<br />
Updating images with CUDA<br />
Adding OpenStack Ceilometer for better resource monitoring/accounting<br />
<br />
* February-March 2016<br />
Testing VM migration<br />
Examining GLUE schemas<br />
Examining accounting format and tools<br />
<br />
* April 2016 <br />
Status report presented at EGI Conference 2016<br />
<br />
* May 2016<br />
[https://cernbox.cern.ch/index.php/s/JPGIMJunHMl37Bo GLUE2.1 draft] discussed at GLUE-WG meeting and updated with relevant Accelerator card specific attributes.<br />
GPGPU experimental support enabled on CESNET-Metacloud site. VMs with Tesla M2090 GPU cards tested with DisVis program. <br />
Working on GPU support with the LXC/LXD hypervisor in OpenStack, which would provide better performance than KVM.<br />
<br />
* Next steps <br />
Production, application support<br />
Cooperation with APEL team on accounting of GPUs<br />
Generating II according to GLUE 2.1<br />
<br />
= [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing Back to Accelerated Computing task] =</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=93471
Competence centre MoBrain
2017-03-02T14:00:20Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that is better tailored to end users' needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with direct relevance for neuroscience, starting from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostics, and the full characterization of every pathological mechanism of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*Docker images, one with OpenCL + NVIDIA drivers + the DisVis application and one with the PowerFit application, ready to run on GPU servers, have been prepared for Ubuntu, with the performance-checking goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which had NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on 1 December 2015.<br />
*'''On 14 July 2016 the CIRMMP servers were updated with the latest NVIDIA driver 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub at the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "disvis.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "disvis.err";<br />
outputsandbox = { "disvis.out" , "disvis.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
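<br />
After submission, the job status and, once finished, the output sandbox declared in the JDL can be retrieved with the CREAM client tools, for example as in this sketch (assuming the standard -i option, which reads the job IDs from the file written by -o above):<br />
<pre>$ glite-ce-job-status -i jobid.txt<br />
$ glite-ce-job-output -i jobid.txt</pre> <br />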
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 19th of September 2016)===<br />
<br />
If the docker engine is not available on the grid WNs, you can use the INDIGO-DataCloud "udocker" tool. This has the advantage that docker containers are run in user space, so the grid user does not obtain root privileges on the WN, thus avoiding any security concern. The disvis.sh file in this case is as below:<br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run --hostenv --volume=$WDIR:/home disvis-$rnd disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_$driver<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the JDL script above were taken from the GitHub repository for DisVis, available at: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card), but may differ with the latest update of the DisVis code: <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl, e.g.:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "powerfit.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "powerfit.err";<br />
outputsandbox = { "powerfit.out" , "powerfit.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
with powerfit.sh as given [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing the latest NVIDIA drivers and the DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data files to contextualise the VMs so that they are ready for installing the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public SSH key. <br />
*After running the scripts above (voms-proxy-init --voms enmr.eu -rfc is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), ssh into the VM with your SSH key, run sudo su -, and then execute ./install-gpu-driver.sh. After that the VM is ready for installing the application software, by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=93240
Federated Cloud infrastructure status
2017-02-16T14:21:21Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://cloudmon.egi.eu/nagios/ cloudmon.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: May 2016''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100 Percent IT Ltd <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty&nbsp; <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UNIZAR / BIFI <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | Carlos Gimeno <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-epsh.unizar.es:8787 OCCI] (Grizzly)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 36 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | - 288 Cores with 592GB RAM (36 Xeon servers with 8 cores, 16GB RAM and local HD 500GB). <br />
- 160 Cores with 512GB RAM (8 Xeon servers with 20 cores, 64GB RAM and local HD 500GB). - Two data stores of 3TB and 700GB. This infrastructure is used to run several core services for EGI.eu and our capacity is compromised due to that. <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack<br>Kilo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.zam.kfa-juelich.de:8787/ OCCI]<br>[https://swift.zam.kfa-juelich.de:8888/ CDMI]<br>[https://fsd-cloud.zam.kfa-juelich.de:5000/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 216 Cores with 294 GB RAM <br />
<br />
- 50 TB Storage (~20TB object + ~30TB block) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name:&nbsp;m16<br>16 Cores<br>16 GB RAM<br>Default Disk:&nbsp;20 (Root) + 20 (Ephemeral)<br>1TB attachable block storage<br>Open Ports: 22, 80, 443, 7000-7020 <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Enol Fernandez, Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI]<br />
<br />
[https://cloud.ifca.es:8774/ OpenStack]<br />
<br />
[https://cephrgw.ifca.es:8080/ Swift]<br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 2368 Cores (32 nodes x 8 vCPUs x 16GB RAM, 36 nodes x 24 vCPUs x 48GB RAM, 34 nodes x 32 vCPUs x 128GB RAM, 2 nodes x 80 vCPUs x 1TB RAM)<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request)<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GRNET <br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | Nikolaos Nikoloutsakos, Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
- 1 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp;<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos<br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova2.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 80 Cores with 1.5 GB RAM per core<br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 96 Cores with 4GB RAM per core, 12 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2GPU Tesla K20m</span><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardaci <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM <br />
<br />
- 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-02-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://stack-server-02.ct.infn.it:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 320 GB RAM <br />
<br />
- 3TB object storage, 2TB block storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (featuring 16GB RAM, 160 GB hard disk, 8CPU)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI] (no CDMI endpoint provided at the moment, it will be back in the near future) <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand.<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago, Matteo Segatta <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=1024 INFN-PADOVA-STACK] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 144 Cores with 283 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 200GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Sahara<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 168 Cores with 896 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2016-10-20) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM <br />
<br />
- 1 TB Storage <br />
<br />
'''Information for MK-04-FINKICLOUD may be outdated.''' <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | Miguel Ángel Díaz, Abel Paz <br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack IceHouse <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://controller.ceta-ciemat.es:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 184 Cores with 224 GB RAM <br />
<br />
- 5.3 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xxlarge: 8 VCPUs, 12GB RAM, 40GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger, Vincent Legoll <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 192 Cores with 1232 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.2xlarge (CPU: 16, RAM: 32 GB, disk: 320 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 16 Cores with 64GB RAM and 2 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Uncertified <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
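The "Resource Endpoints" column above lists the OCCI (and, where available, CDMI or native) endpoints exposed by each Resource Centre. As a hedged illustration, the Python sketch below probes the OCCI query interface ("/-/") of one endpoint copied from the table; the proxy path is a hypothetical placeholder, a VOMS proxy for the fedcloud.egi.eu VO is normally required, and certificate/CA handling is simplified for brevity. <br />
<br />
<pre>
# Minimal sketch: query the OCCI capability interface ("/-/") of one of the
# endpoints listed in the "Resource Endpoints" column (CESNET-MetaCloud).
# Assumptions: PROXY_FILE is a hypothetical path to a VOMS proxy for the
# fedcloud.egi.eu VO; verify=False skips CA-bundle handling for brevity,
# and a site may still reject the request without proper VOMS attributes.
import requests

OCCI_ENDPOINT = "https://carach5.ics.muni.cz:11443"   # taken from the table above
PROXY_FILE = "/tmp/x509up_u1000"                      # hypothetical proxy location

response = requests.get(
    OCCI_ENDPOINT + "/-/",                # OCCI query/capability interface
    headers={"Accept": "text/plain"},     # plain-text OCCI rendering
    cert=PROXY_FILE,                      # client (proxy) certificate for X.509 auth
    verify=False,
)
print(response.status_code)
print(response.text[:500])                # first OCCI categories advertised by the site
</pre>
<br />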
== Integrating resource providers ==<br />
<br />
Last update: May 2015 <br />
<br />
Sites that have a valid GOCDB entry should also have at least one service type listed and monitored via cloudmon.egi.eu. <br />
<br />
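As an illustration of this GOCDB requirement, the sketch below queries the public GOCDB programmatic interface for the service endpoints registered by one of the integrating sites. The base URL, method name and XML element names are assumptions based on the public GOCDB PI and should be checked against the GOCDB documentation. <br />
<br />
<pre>
# Minimal sketch (assumptions: the public GOCDB programmatic interface is
# reachable at goc.egi.eu/gocdbpi/public/, supports method=get_service_endpoint
# with a sitename filter, and returns XML containing SERVICE_ENDPOINT elements;
# check the GOCDB documentation for the authoritative interface).
import requests
import xml.etree.ElementTree as ET

GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/"
SITE = "KR-KISTI-CLOUD"   # host site from the table below

response = requests.get(
    GOCDB_PI,
    params={"method": "get_service_endpoint", "sitename": SITE},
    timeout=30,
)
response.raise_for_status()

root = ET.fromstring(response.text)
for endpoint in root.findall(".//SERVICE_ENDPOINT"):
    print(endpoint.findtext("SERVICE_TYPE"), endpoint.findtext("HOSTNAME"))
</pre>
<br />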
{| cellspacing="0" cellpadding="5" class="wikitable sortable" style="border:1px solid black; text-align:left;"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main Contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Certification <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | KISTI <br />
| style="border-bottom:1px dotted silver;" | KR <br />
| style="border-bottom:1px dotted silver;" | Soonwook Hwang <br />
| style="border-bottom:1px dotted silver;" | Sangwan Kim, Taesang Huh, Jae-Hyuck Kwak <br />
| style="border-bottom:1px dotted silver;" | KR-KISTI-CLOUD <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fccont.kisti.re.kr:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 64 cores with 256 GB RAM and 6TB HDD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSC <br />
| style="border-bottom:1px dotted silver;" | FI <br />
| style="border-bottom:1px dotted silver;" | Jura Tarus <br />
| style="border-bottom:1px dotted silver;" | Luís Alves, Ulf Tigerstedt, Kalle Happonen <br />
| style="border-bottom:1px dotted silver;" | CSC-Cloud <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Status: Testing resource integration<br />
|}<br />
<br />
== Interested resource providers ==<br />
<br />
{| cellspacing="0" cellpadding="5" class="wikitable sortable" style="border:1px solid black; text-align:left;"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Representative <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Integration plans <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC-EBD <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Jesús Marco <br />
| style="border-bottom:1px dotted silver;" | Fernando Aguilar, Juan Carlos Sexto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | to integrate before 30th June <br />
| style="border-bottom:1px dotted silver;" | ~1000 cores (500 cores initially available in FedCloud), 1PB of storage (around 50% devoted to support FedCloud)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IICT-BAS <br />
| style="border-bottom:1px dotted silver;" | BG <br />
| style="border-bottom:1px dotted silver;" | Emanouil Atanassov <br />
| style="border-bottom:1px dotted silver;" | Todor Gurov <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Napoli <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Silvio Pardi <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - CNAF <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Cristina Aiftimiei <br />
| style="border-bottom:1px dotted silver;" | Davide Salomoni, Diego Michelotto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Torino <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Andrea Guarise <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubio Montero <br />
| style="border-bottom:1px dotted silver;" | Rafael Mayo García, Manuel Aurelio Rodríguez Pascual <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SURFsara <br />
| style="border-bottom:1px dotted silver;" | NL <br />
| style="border-bottom:1px dotted silver;" | Ron Trompert <br />
| style="border-bottom:1px dotted silver;" | Maurice Bouwhuis, Machiel Jansen <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | ISRGrid/IUCC <br />
| style="border-bottom:1px dotted silver;" | IL <br />
| style="border-bottom:1px dotted silver;" | Yossi Baruch <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | DESY <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Patrick Furhmann <br />
| style="border-bottom:1px dotted silver;" | Paul Millar <br />
| style="border-bottom:1px dotted silver;" | dCache <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Ian Collier <br />
| style="border-bottom:1px dotted silver;" | Frazer Barnsley, Alan Kyffin <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL Harwell Science <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Jens Jensen <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Castor <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Cloud storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFAE / PIC <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Victor Mendez <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SAGrid <br />
| style="border-bottom:1px dotted silver;" | ZA <br />
| style="border-bottom:1px dotted silver;" | Bruce Becker <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SRCE <br />
| style="border-bottom:1px dotted silver;" | HR <br />
| style="border-bottom:1px dotted silver;" | Emir Imamagic <br />
| style="border-bottom:1px dotted silver;" | Luko Gjenero <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | Planned - April 2014 <br />
| style="border-bottom:1px dotted silver;" | Status: Deploying OpenStack cluster, investigating storage options<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GridPP <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Adam Huffman <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Hosted at Imperial College<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CRNS/IN2P3-LAL <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Michel Jouvin <br />
| style="border-bottom:1px dotted silver;" | Mohammed Araj <br />
| style="border-bottom:1px dotted silver;" | StratusLab <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=93239
Federated Cloud infrastructure status
2017-02-16T14:14:58Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://cloudmon.egi.eu/nagios/ cloudmon.egi.eu nagios instance]. <br />
<br />
Details on the availability and reliability of Resource Centres are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or on the [http://accounting-devel.egi.eu/cloud.php accounting portal development instance]. <br />
<br />
'''Last update: May 2016''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, CPU, storage)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100 Percent IT Ltd <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty&nbsp; <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UNIZAR / BIFI <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | Carlos Gimeno <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-epsh.unizar.es:8787 OCCI] (Grizzly)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 36 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | - 288 Cores with 592GB RAM (36 Xeon servers with 8 cores, 16GB RAM and local HD 500GB). <br />
- 160 Cores with 512GB RAM (8 Xeon servers with 20 cores, 64GB RAM and local HD 500GB). - Two data stores of 3TB and 700GB. This infrastructure is used to run several core services for EGI.eu and our capacity is compromised due to that. <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack<br>Kilo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.zam.kfa-juelich.de:8787/ OCCI]<br>[https://swift.zam.kfa-juelich.de:8888/ CDMI]<br>[https://fsd-cloud.zam.kfa-juelich.de:5000/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 216 Cores with 294 GB RAM <br />
<br />
- 50 TB Storage (~20TB object + ~30TB block) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name:&nbsp;m16<br>16 Cores<br>16 GB RAM<br>Default Disk:&nbsp;20 (Root) + 20 (Ephemeral)<br>1TB attachable block storage<br>Open Ports: 22, 80, 443, 7000-7020 <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Enol Fernandez, Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI]<br />
<br />
[https://cloud.ifca.es:8774/ OpenStack]<br />
<br />
[https://cephrgw.ifca.es:8080/ Swift]<br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 2368 Cores (32 nodes x 8 vCPUs x 16GB RAM, 36 nodes x 24 vCPUs x 48GB RAM, 34 nodes x 32 vCPUs x 128GB RAM, 2 nodes x 80 vCPUs x 1TB RAM)<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request)<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GRNET <br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | Nikolaos Nikoloutsakos, Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
- 1 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp;<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky, Jan Astalos<br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova2.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 80 Cores with 1.5 GB RAM per core<br />
<br />
&nbsp;- 9 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 96 Cores with 4GB RAM per core, 12 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <span style="color: rgb(51, 51, 51); font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 18.5714px; background-color: rgb(245, 245, 245);">gpu2cpu12</span><span style="font-size: 13.28px; line-height: 1.5em;">: 12 VCPUs, 48GB RAM, 200GB HD, 2GPU Tesla K20m</span><br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardaci <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM <br />
<br />
- 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-02-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://stack-server-02.ct.infn.it:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 320 GB RAM <br />
<br />
- 3TB object storage, 2TB block storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (featuring 16GB RAM, 160 GB hard disk, 8CPU)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI] (no CDMI endpoint provided at the moment, it will be back in the near future) <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand.<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago, Matteo Segatta <br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 144 Cores with 283 GB RAM <br />
<br />
- 2.2TB of overall block storage and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 200GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Sahara<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 168 Cores with 896 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2016-10-20) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM <br />
<br />
- 1 TB Storage <br />
<br />
'''Information for MK-04-FINKICLOUD may be outdated.''' <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero, <br />
| style="border-bottom:1px dotted silver;" | Miguel Ángel Díaz, Abel Paz <br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack IceHouse <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://controller.ceta-ciemat.es:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 184 Cores with 224 GB RAM <br />
<br />
- 5.3 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xxlarge: 8 VCPUs, 12GB RAM, 40GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger, Vincent Legoll <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 192 Cores with 1232 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.2xlarge (CPU: 16, RAM: 32 GB, disk: 320 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | Openstack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polythecnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 VCPUs, 32GB of RAM, local HD 200 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 16 Cores with 64GB RAM and 2 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 14 CPU cores, 2GPU, 56GB of RAM, 830GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Uncertified <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
== Integrating resource providers ==<br />
<br />
Last update: May 2015 <br />
<br />
Sites that have a valid GOCDB entry should also have at least one service type listed and monitored via cloudmon.egi.eu. <br />
<br />
{| cellspacing="0" cellpadding="5" class="wikitable sortable" style="border:1px solid black; text-align:left;"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main Contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Certification <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | KISTI <br />
| style="border-bottom:1px dotted silver;" | KR <br />
| style="border-bottom:1px dotted silver;" | Soonwook Hwang <br />
| style="border-bottom:1px dotted silver;" | Sangwan Kim, Taesang Huh, Jae-Hyuck Kwak <br />
| style="border-bottom:1px dotted silver;" | KR-KISTI-CLOUD <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fccont.kisti.re.kr:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 64 cores with 256 GB RAM and 6TB HDD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSC <br />
| style="border-bottom:1px dotted silver;" | FI <br />
| style="border-bottom:1px dotted silver;" | Jura Tarus <br />
| style="border-bottom:1px dotted silver;" | Luís Alves, Ulf Tigerstedt, Kalle Happonen <br />
| style="border-bottom:1px dotted silver;" | CSC-Cloud <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Status: Testing resource integration<br />
|}<br />
<br />
== Interested resource providers ==<br />
<br />
{| cellspacing="0" cellpadding="5" class="wikitable sortable" style="border:1px solid black; text-align:left;"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Representative <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Integration plans <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC-EBD <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Jesús Marco <br />
| style="border-bottom:1px dotted silver;" | Fernando Aguilar, Juan Carlos Sexto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | to integrate before 30th June <br />
| style="border-bottom:1px dotted silver;" | ~1000 cores (500 cores initially available in FedCloud), 1PB of storage (around 50% devoted to support FedCloud)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IICT-BAS <br />
| style="border-bottom:1px dotted silver;" | BG <br />
| style="border-bottom:1px dotted silver;" | Emanouil Atanassov <br />
| style="border-bottom:1px dotted silver;" | Todor Gurov <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Napoli <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Silvio Pardi <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - CNAF <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Cristina Aiftimiei <br />
| style="border-bottom:1px dotted silver;" | Davide Salomoni, Diego Michelotto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Torino <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Andrea Guarise <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubio Montero <br />
| style="border-bottom:1px dotted silver;" | Rafael Mayo García, Manuel Aurelio Rodríguez Pascual <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SURFsara <br />
| style="border-bottom:1px dotted silver;" | NL <br />
| style="border-bottom:1px dotted silver;" | Ron Trompert <br />
| style="border-bottom:1px dotted silver;" | Maurice Bouwhuis, Machiel Jansen <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | ISRGrid/IUCC <br />
| style="border-bottom:1px dotted silver;" | IL <br />
| style="border-bottom:1px dotted silver;" | Yossi Baruch <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | DESY <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Patrick Fuhrmann <br />
| style="border-bottom:1px dotted silver;" | Paul Millar <br />
| style="border-bottom:1px dotted silver;" | dCache <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Ian Collier <br />
| style="border-bottom:1px dotted silver;" | Frazer Barnsley, Alan Kyffin <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL Harwell Science <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Jens Jensen <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Castor <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Cloud storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFAE / PIC <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Victor Mendez <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SAGrid <br />
| style="border-bottom:1px dotted silver;" | ZA <br />
| style="border-bottom:1px dotted silver;" | Bruce Becker <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SRCE <br />
| style="border-bottom:1px dotted silver;" | HR <br />
| style="border-bottom:1px dotted silver;" | Emir Imamagic <br />
| style="border-bottom:1px dotted silver;" | Luko Gjenero <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | Planned - April 2014 <br />
| style="border-bottom:1px dotted silver;" | Status: Deploying OpenStack cluster, investigating storage options<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GridPP <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Adam Huffman <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Hosted at Imperial College<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CNRS/IN2P3-LAL <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Michel Jouvin <br />
| style="border-bottom:1px dotted silver;" | Mohammed Araj <br />
| style="border-bottom:1px dotted silver;" | StratusLab <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Federated_Cloud_infrastructure_status&diff=93069
Federated Cloud infrastructure status
2017-02-08T19:09:56Z
<p>Verlato: </p>
<hr />
<div>{{Fedcloud_Menu}} {{TOC_right}} <br />
<br />
The purposes of this page are <br />
<br />
*providing a snapshot of the resources that are provided by the Federated Cloud infrastructure <br />
*providing information about the sites that are joining, or have expressed interest in joining the FedCloud <br />
*providing the list of sites supporting the fedcloud.egi.eu VO, which is the VO used to allow the evaluation of the FedCloud infrastructure by a given provider<br />
<br />
If you have any comments on the content of this page, please contact '''operations @ egi.eu'''. <br />
<br />
== Status of the Federated Cloud ==<br />
<br />
The table here shows all Resource Centres fully integrated into the Federated Cloud infrastructure and certified through the [[PROC09|EGI Resource Centre Registration and Certification]]. <br />
<br />
The status of all the services is monitored via the [https://cloudmon.egi.eu/nagios/ cloudmon.egi.eu nagios instance]. <br />
<br />
Details on Resource Centres availability and reliability are available on [http://argo.egi.eu/lavoisier/cloud_reports?accept=html ARGO]. <br />
<br />
Accounting data are available on the [http://accounting.egi.eu/cloud.php EGI Accounting Portal] or in the [http://accounting-devel.egi.eu/cloud.php accounting portal dev instance]. <br />
<br />
'''Last update: May 2016''' <br />
<br />
{| cellspacing="0" cellpadding="5" style="border:1px solid black; text-align:left;" class="wikitable sortable"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | Production status <br />
! style="border-bottom:1px solid black;" | CMF Version <br />
! style="border-bottom:1px solid black;" | Supporting fedcloud.egi.eu <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Committed Production Launch resources <br />
! style="border-bottom:1px solid black;" | VM maximum size (memory, cpu, storage)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | 100 Percent IT Ltd <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | David Blundell <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | 100IT <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty&nbsp; <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://occi-api.100percentit.com:8787/occi1.1 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 120 Cores with 128 GB RAM <br />
<br />
- 16TB Shared storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UNIZAR / BIFI <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ruben Valles <br />
| style="border-bottom:1px dotted silver;" | Carlos Gimeno <br />
| style="border-bottom:1px dotted silver;" | BIFI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[http://server4-epsh.unizar.es:8787 OCCI] (Grizzly)<br> [http://server4-eupt.unizar.es:8787 OCCI] (Icehouse) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 720 Cores with 740 GB RAM <br />
<br />
- 36 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Cyfronet (NGI PL) <br />
| style="border-bottom:1px dotted silver;" | PL <br />
| style="border-bottom:1px dotted silver;" | Tomasz Szepieniec <br />
| style="border-bottom:1px dotted silver;" | Patryk Lasoń, Łukasz Flis <br />
| style="border-bottom:1px dotted silver;" | [https://next.gocdb.eu/portal/index.php?Page_Type=Site&id=966 CYFRONET-CLOUD] <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://control.cloud.cyfronet.pl:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 200 Cores with 400 GB RAM <br />
<br />
- 5 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16 GB of RAM, 10 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FCTSG <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubén Díez <br />
| style="border-bottom:1px dotted silver;" | Iván Díaz <br />
| style="border-bottom:1px dotted silver;" | CESGA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://cloud.cesga.es:3202/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | - 288 Cores with 592GB RAM (36 Xeon servers with 8 cores, 16GB RAM and local HD 500GB). <br />
- 160 Cores with 512GB RAM (8 Xeon servers with 20 cores, 64GB RAM and local HD 500GB). - Two data stores of 3TB and 700GB. This infrastructure also runs several core services for EGI.eu, so the capacity available to the FedCloud is reduced accordingly. <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 cores, 16GB of RAM, local HD 470 GB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CESNET <br />
| style="border-bottom:1px dotted silver;" | CZ <br />
| style="border-bottom:1px dotted silver;" | Miroslav Ruda <br />
| style="border-bottom:1px dotted silver;" | Boris Parak <br />
| style="border-bottom:1px dotted silver;" | CESNET-MetaCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://carach5.ics.muni.cz:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 416 Cores with 2.4 TB RAM <br />
<br />
- 56.6 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 32 cores, 185 GB of RAM, approx. 3 TB of attached storage<br />
|-<br />
| style="border-bottom:1px dotted silver;" | FZ Jülich <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Björn Hagemeier <br />
| style="border-bottom:1px dotted silver;" | Shahbaz Memon <br />
| style="border-bottom:1px dotted silver;" | FZJ <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack<br>Kilo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.zam.kfa-juelich.de:8787/ OCCI]<br>[https://swift.zam.kfa-juelich.de:8888/ CDMI]<br>[https://fsd-cloud.zam.kfa-juelich.de:5000/v2.0 OpenStack] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 216 Cores with 294 GB RAM <br />
<br />
- 50 TB Storage (~20TB object + ~30TB block) <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
Name:&nbsp;m16<br>16 Cores<br>16 GB RAM<br>Default Disk:&nbsp;20 (Root) + 20 (Ephemeral)<br>1TB attachable block storage<br>Open Ports: 22, 80, 443, 7000-7020 <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GWDG <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Piotr Kasprzak <br />
| style="border-bottom:1px dotted silver;" | Ramin Yahyapour, Philipp Wieder <br />
| style="border-bottom:1px dotted silver;" | GoeGrid <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | NO <br />
| style="border-bottom:1px dotted silver;" | [https://occi.cloud.gwdg.de:3100/ OCCI]<br>[http://cdmi.cloud.gwdg.de:4001 CDMI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 192 Cores with 768 GB RAM <br />
<br />
&nbsp;- 40 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 64 cores, 240 GB RAM, 3 TB disk<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC/IFCA <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Alvaro Lopez Garcia <br />
| style="border-bottom:1px dotted silver;" | Enol Fernandez, Pablo Orviz <br />
| style="border-bottom:1px dotted silver;" | IFCA-LCG2 <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
[https://cloud.ifca.es:8787/ OCCI]<br />
<br />
[https://cloud.ifca.es:8774/ OpenStack]<br />
<br />
[https://cephrgw.ifca.es:8080/ Swift]<br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
- 2368 Cores ( 32 nodes x 8 vcpus x 16GB RAM -&nbsp; 36 nodes x 24 vcpus x 48GB RAM - 34 nodes x 32 vcpus x 128GB RAM - 2 nodes x 80 vcpus x 1TB RAM )<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
cm4.4xlarge: 32 vCPUs, 160GB HD, 64GB RAM <br />
<br />
x1.20xlarge: 80 vCPUS, 100GB HD, 1TB RAM (upon request)<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GRNET <br />
| style="border-bottom:1px dotted silver;" | GR <br />
| style="border-bottom:1px dotted silver;" | Kostas Koumantaros <br />
| style="border-bottom:1px dotted silver;" | Nikolaos Nikoloutsakos, Kyriakos Ginis <br />
| style="border-bottom:1px dotted silver;" | HG-09-Okeanos-Cloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | Synnefo <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://okeanos-occi2.hellasgrid.gr:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 70 CPUs with 220 GB RAM <br />
<br />
<br> - 1 TB storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 24 Cores, 192GB RAM, 500GB HD&nbsp;<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-FedCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova2.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 176 Cores with 3GB RAM <br />
<br />
&nbsp;- 50 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xlarge: 8 VCPUs, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-GPUCloud <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nova3.ui.savba.sk:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
&nbsp;- 96 Cores with 4GB RAM per core, 12 GPUs K20m <br />
<br />
&nbsp;- 6 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | gpu2cpu12: 12 VCPUs, 48GB RAM, 200GB HD, 2 GPU Tesla K20m<br />
|-<br />
| style="border-bottom:1px dotted silver;" rowspan="3" | INFN - Catania <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | IT <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Roberto Barbera <br />
| style="border-bottom:1px dotted silver;" rowspan="3" | Giuseppe La Rocca, Diego Scardaci <br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-NEBULA <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula-server-01.ct.infn.it:9000/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 cores, 64 GB RAM <br />
<br />
- 5.4 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://stack-server-01.ct.infn.it:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 16 Cores with 65 GB RAM <br />
<br />
- 16 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (16384 MB, 8 VCPU, 160 GB)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INDIGO-CATANIA-STACK <br />
| style="border-bottom:1px dotted silver;" | Suspended (2017-02-01) <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://stack-server-02.ct.infn.it:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 320 GB RAM <br />
<br />
- 3TB object storage, 2TB block storage <br />
<br />
| style="border-bottom:1px dotted silver;" | VM maximum size = m1.xlarge (featuring 16GB RAM, 160 GB hard disk, 8CPU)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Bari <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marica Antonacci <br />
| style="border-bottom:1px dotted silver;" | Giacinto Donvito, Stefano Nicotri, Vincenzo Spinoso <br />
| style="border-bottom:1px dotted silver;" | RECAS-BARI (was:PRISMA-INFN-BARI) <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Juno <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://cloud.recas.ba.infn.it:8787/occi OCCI] (no CDMI endpoint provided at the moment, it will be back in the near future) <br />
| style="border-bottom:1px dotted silver;" | <br />
- 300 Cores with 600GB of RAM <br />
<br />
- 50 TB Storage. <br />
<br />
| style="border-bottom:1px dotted silver;" | Flavor "m1.xxlarge": 24 cores, 48GB RAM, 100 GB disk. Up to 500GB block storage can be attached on-demand.<br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Marco Verlato <br />
| style="border-bottom:1px dotted silver;" | Federica Fanzago <br />
| style="border-bottom:1px dotted silver;" | INFN-PADOVA-STACK <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Liberty <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://egi-cloud.pd.infn.it:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 144 Cores with 283 GB RAM <br />
<br />
- 3.7TB of block storage (max 1TB per tenant) and 1.8TB of ephemeral storage per compute node <br />
<br />
| style="border-bottom:1px dotted silver;" | <br />
m1.hpc: 24 VCPUs, 46GB RAM, 200GB HD, up to 1TB attachable block storage <br />
<br />
Up to 24 public IPs <br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | TUBITAK ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | TR <br />
| style="border-bottom:1px dotted silver;" | Feyza Eryol <br />
| style="border-bottom:1px dotted silver;" | Onur Temizsoylu <br />
| style="border-bottom:1px dotted silver;" | TR-FC1-ULAKBIM <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Sahara<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [http://fcctrl.ulakbim.gov.tr:8787/occi1.1/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 168 Cores with 896 GB RAM&nbsp; <br />
<br />
- 40 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | UKIM <br />
| style="border-bottom:1px dotted silver;" | MK <br />
| style="border-bottom:1px dotted silver;" | Boro Jakimovski <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | MK-04-FINKICLOUD <br />
| style="border-bottom:1px dotted silver;" | Suspended (2016-10-20) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://nebula.finki.ukim.mk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 100 Cores with 24 GB RAM - 1 TB Storage <br />
<br />
'''Information for MK-04-FINKICLOUD may be outdated.''' <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | CETA-CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Guillermo Díaz Herrero <br />
| style="border-bottom:1px dotted silver;" | Miguel Ángel Díaz, Abel Paz <br />
| style="border-bottom:1px dotted silver;" | CETA-GRID <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack IceHouse <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://controller.ceta-ciemat.es:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 184 Cores with 224 GB RAM <br />
<br />
- 5.3 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.xxlarge: 8 VCPUs, 12GB RAM, 40GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IPHC <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Jerome Pansanel <br />
| style="border-bottom:1px dotted silver;" | Sebastien Geiger, Vincent Legoll <br />
| style="border-bottom:1px dotted silver;" | IN2P3-IRES <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://sbgcloud.in2p3.fr:8787 OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 192 Cores with 1232 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | m1.2xlarge (CPU: 16, RAM: 32 GB, disk: 320 GB) <br> Monitoring&nbsp;: <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgbdii1.in2p3.fr <br />
*https://cloudmon.egi.eu/nagios/cgi-bin/extinfo.cgi?type=1&amp;host=sbgcloud.in2p3.fr<br />
<br />
|-<br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT / LIP <br />
| style="border-bottom:1px dotted silver;" | PT <br />
| style="border-bottom:1px dotted silver;" | Mario David <br />
| style="border-bottom:1px dotted silver;" | Joao Pina, Joao Martins <br />
| style="border-bottom:1px dotted silver;" | NCG-INGRID-PT <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | OpenStack Mitaka<br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 80 Cores with 192 GB RAM <br />
<br />
- 3 TB Storage<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | Name m1.xlarge, VCPUs 8, Root Disk 160 GB, Ephemeral Disk 0 GB, Total Disk 160 GB, RAM 16,384 MB<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Polytechnic University of Valencia, I3M <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Ignacio Blanquer <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | UPV-GRyCAP <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc-one.i3m.upv.es:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 Cores with 192 GB RAM <br />
<br />
- 5 TB Storage <br />
<br />
| style="border-bottom:1px dotted silver;" | 4 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | Fraunhofer SCAI <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Horst Schwichtenberg <br />
| style="border-bottom:1px dotted silver;" | Andre Gemuend <br />
| style="border-bottom:1px dotted silver;" | SCAI <br />
| style="border-bottom:1px dotted silver;" | Certified <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fc.scai.fraunhofer.de:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | <br />
- 128 physical cores + HT, 244 GB RAM <br />
<br />
- 20 TB Storage (Glance &amp; Cinder) <br />
<br />
| style="border-bottom:1px dotted silver;" | 8 VCPUS, 16GB RAM, 160GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | BELSPO <br />
| style="border-bottom:1px dotted silver;" | BE<br> <br />
| style="border-bottom:1px dotted silver;" | Stephane GERARD <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | BEgrid-BELNET <br />
| style="border-bottom:1px dotted silver;" | Certified (2016-12) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 4 <br />
| style="border-bottom:1px dotted silver;" | YES<br> <br />
| style="border-bottom:1px dotted silver;" | [https://rocci.iihe.ac.be:11443/ OCCI]<br> <br />
| style="border-bottom:1px dotted silver;" | <br />
- 160 physical cores + HT, 512 GB RAM <br />
<br />
- 10 TB Storage <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|-<br />
| style="border-bottom:1px dotted silver;" | II SAS <br />
| style="border-bottom:1px dotted silver;" | SK <br />
| style="border-bottom:1px dotted silver;" | Viet Tran <br />
| style="border-bottom:1px dotted silver;" | Jan Astalos, Miroslav Dobrucky <br />
| style="border-bottom:1px dotted silver;" | IISAS-Nebula <br />
| style="border-bottom:1px dotted silver;" | Certified (2017-01-25) <br />
| style="border-bottom:1px dotted silver;" | OpenNebula 5 <br />
| style="border-bottom:1px dotted silver;" | No <br />
| style="border-bottom:1px dotted silver;" | [https://nebula2.ui.savba.sk:11443/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 16 Cores with 64GB RAM and 2 GPUs K20m - Storage 147GB <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | 16 CPU cores, 2GPU, 64GB of RAM, 830GB HD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | RO <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | CLOUDIFIN <br />
| style="border-bottom:1px dotted silver;" | Uncertified <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br> <br />
| style="border-bottom:1px dotted silver;" | <br />
<br> <br />
<br />
| style="border-bottom:1px dotted silver;" | <br><br />
|}<br />
<br />
== Integrating resource providers ==<br />
<br />
Last update: May 2015 <br />
<br />
Sites that have a valid GOCDB entry should also have at least one service type listed and monitored via cloudmon.egi.eu. <br />
<br />
{| cellspacing="0" cellpadding="5" class="wikitable sortable" style="border:1px solid black; text-align:left;"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Main Contact <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | Host Site <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Certification <br />
! style="border-bottom:1px solid black;" | Resource Endpoints <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | KISTI <br />
| style="border-bottom:1px dotted silver;" | KR <br />
| style="border-bottom:1px dotted silver;" | Soonwook Hwang <br />
| style="border-bottom:1px dotted silver;" | Sangwan Kim, Taesang Huh, Jae-Hyuck Kwak <br />
| style="border-bottom:1px dotted silver;" | KR-KISTI-CLOUD <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | [https://fccont.kisti.re.kr:8787/ OCCI] <br />
| style="border-bottom:1px dotted silver;" | 64 cores with 256 GB RAM and 6TB HDD<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSC <br />
| style="border-bottom:1px dotted silver;" | FI <br />
| style="border-bottom:1px dotted silver;" | Jura Tarus <br />
| style="border-bottom:1px dotted silver;" | Luís Alves, Ulf Tigerstedt, Kalle Happonen <br />
| style="border-bottom:1px dotted silver;" | CSC-Cloud <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | YES <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Status: Testing resource integration<br />
|}<br />
<br />
== Interested resource providers ==<br />
<br />
{| cellspacing="0" cellpadding="5" class="wikitable sortable" style="border:1px solid black; text-align:left;"<br />
|- style="background:lightgray;"<br />
! style="border-bottom:1px solid black;" | Affiliation <br />
! style="border-bottom:1px solid black;" | CC <br />
! style="border-bottom:1px solid black;" | Representative <br />
! style="border-bottom:1px solid black;" | Deputies <br />
! style="border-bottom:1px solid black;" | CMF <br />
! style="border-bottom:1px solid black;" | Integration plans <br />
! style="border-bottom:1px solid black;" | Comment<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CSIC-EBD <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Jesús Marco <br />
| style="border-bottom:1px dotted silver;" | Fernando Aguilar, Juan Carlos Sexto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | to integrate before 30th June <br />
| style="border-bottom:1px dotted silver;" | ~1000 cores (500 cores initially available in FedCloud), 1PB of storage (around 50% devoted to support FedCloud)<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IICT-BAS <br />
| style="border-bottom:1px dotted silver;" | BG <br />
| style="border-bottom:1px dotted silver;" | Emanouil Atanassov <br />
| style="border-bottom:1px dotted silver;" | Todor Gurov <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Napoli <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Silvio Pardi <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - CNAF <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Cristina Aiftimiei <br />
| style="border-bottom:1px dotted silver;" | Davide Salomoni, Diego Michelotto <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | INFN - Torino <br />
| style="border-bottom:1px dotted silver;" | IT <br />
| style="border-bottom:1px dotted silver;" | Andrea Guarise <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | CIEMAT <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Rubio Montero <br />
| style="border-bottom:1px dotted silver;" | Rafael Mayo García, Manuel Aurelio Rodríguez Pascual <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SURFsara <br />
| style="border-bottom:1px dotted silver;" | NL <br />
| style="border-bottom:1px dotted silver;" | Ron Trompert <br />
| style="border-bottom:1px dotted silver;" | Maurice Bouwhuis, Machiel Jansen <br />
| style="border-bottom:1px dotted silver;" | OpenNebula <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | ISRGrid/IUCC <br />
| style="border-bottom:1px dotted silver;" | IL <br />
| style="border-bottom:1px dotted silver;" | Yossi Baruch <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | DESY <br />
| style="border-bottom:1px dotted silver;" | DE <br />
| style="border-bottom:1px dotted silver;" | Patrick Fuhrmann <br />
| style="border-bottom:1px dotted silver;" | Paul Millar <br />
| style="border-bottom:1px dotted silver;" | dCache <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Ian Collier <br />
| style="border-bottom:1px dotted silver;" | Frazer Barnsley, Alan Kyffin <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | STFC/RAL Harwell Science <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Jens Jensen <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Castor <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Cloud storage only<br />
|-<br />
| style="border-bottom:1px dotted silver;" | IFAE / PIC <br />
| style="border-bottom:1px dotted silver;" | ES <br />
| style="border-bottom:1px dotted silver;" | Victor Mendez <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SAGrid <br />
| style="border-bottom:1px dotted silver;" | ZA <br />
| style="border-bottom:1px dotted silver;" | Bruce Becker <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|-<br />
| style="border-bottom:1px dotted silver;" | SRCE <br />
| style="border-bottom:1px dotted silver;" | HR <br />
| style="border-bottom:1px dotted silver;" | Emir Imamagic <br />
| style="border-bottom:1px dotted silver;" | Luko Gjenero <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | Planned - April 2014 <br />
| style="border-bottom:1px dotted silver;" | Status: Deploying OpenStack cluster, investigating storage options<br />
|-<br />
| style="border-bottom:1px dotted silver;" | GridPP <br />
| style="border-bottom:1px dotted silver;" | UK <br />
| style="border-bottom:1px dotted silver;" | Adam Huffman <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | OpenStack <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | Hosted at Imperial College<br />
|-<br />
| style="border-bottom:1px dotted silver;" | CNRS/IN2P3-LAL <br />
| style="border-bottom:1px dotted silver;" | FR <br />
| style="border-bottom:1px dotted silver;" | Michel Jouvin <br />
| style="border-bottom:1px dotted silver;" | Mohammed Araj <br />
| style="border-bottom:1px dotted silver;" | StratusLab <br />
| style="border-bottom:1px dotted silver;" | <br />
| style="border-bottom:1px dotted silver;" | <br />
|}<br />
<br />
[[Category:Federated_Cloud]]</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89839
Competence centre MoBrain
2016-09-21T15:42:32Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with direct relevance for neuroscience, ranging from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, to improved diagnostics and the full characterization of the pathological mechanisms of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the goal of checking the performance described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which had NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on 1 December 2015.<br />
*'''On 14 July 2016 the CIRMMP servers were updated with the latest NVIDIA driver 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub in the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps] repository.''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "disvis.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "disvis.err";<br />
outputsandbox = { "disvis.out" , "disvis.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
# detect the NVIDIA driver version on the WN; it is used to select the matching container image tag<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
# run DisVis inside the container, exposing the GPU devices and mounting the job directory as /home<br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv' <br />
# remove the container and pack the results directory for the output sandbox<br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
<br />
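Once the job has been submitted, it can be followed and its results retrieved with the standard CREAM client commands; a minimal sketch (the job identifier is the one saved in jobid.txt by the -o option of glite-ce-job-submit above): <br />
<pre>$ JOBID=$(tail -1 jobid.txt)    # CREAM job identifier recorded at submission time<br />
$ glite-ce-job-status $JOBID<br />
$ glite-ce-job-output $JOBID    # retrieves the output sandbox (disvis.out, disvis.err, res-gpu.tgz)</pre> <br />
<br />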
'''Important update of the 19th September 2016''':<br />
<br />
If docker engine is not available on the grid WNs, you can use the INDIGO-DataCloud "udocker" tool. This has the advantage that docker containers are run in user space, so the grid user does not obtain root privileges on the WN, thus avoiding any security concerns. The disvis.sh file in this case is as follows:<br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd "disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv"<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_$driver<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card), but may differ with the latest update of the DisVis code: <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, simply replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "powerfit.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "powerfit.err";<br />
outputsandbox = { "powerfit.out" , "powerfit.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
with powerfit.sh as shown [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).<br />
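The job is then submitted exactly as for DisVis, only pointing to the new JDL file: <br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch powerfit.jdl</pre> <br />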
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing the latest NVIDIA drivers and the DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data files to contextualise the VMs so that they are ready for the installation of the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh-key, then run sudo su -, and then execute ./install-gpu-driver.sh. After that the VM is ready for the installation of the application software, by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh. A rough sketch of the kind of OCCI call performed by the create-VM scripts is shown below.
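<br />
The following is only an illustrative sketch of such a call, not the exact content of the scripts: the endpoint is the CESNET-MetaCloud one listed in the table above, while the template identifiers and the contextualisation file are placeholders to be replaced with site-specific values (exact options may also vary with the occi client version). <br />
<pre>$ voms-proxy-init --voms enmr.eu<br />
$ occi --endpoint https://carach5.ics.muni.cz:11443/ --auth x509 --user-cred $X509_USER_PROXY --voms \<br />
       --action create --resource compute \<br />
       --mixin os_tpl#IMAGE_TEMPLATE_ID --mixin resource_tpl#FLAVOUR_TEMPLATE_ID \<br />
       --attribute occi.core.title="disvis-gpu-vm" \<br />
       --context user_data="file://$PWD/user_data_ubuntu"</pre></div>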
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89837
Competence centre MoBrain
2016-09-21T14:28:25Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and, in the long term, the Human Brain Project (FET Flagship), and to strengthen the EGI service offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path towards a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory with a direct relevance for neuroscience, starting from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, up to improved diagnostics and the full characterization of every pathological mechanism of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facilities Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di San Giovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A Docker image with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the performance-checking goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)] the DisVis and PowerFit Docker images were then rebuilt to work on the SL6 GPU servers, which had an NVIDIA 319.x driver supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out, with the expected performance, on 1 December 2015.<br />
*'''On 14 July 2016 the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub in the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
<br />
and disvis.sh is as follows (assuming the Docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
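# detect the NVIDIA driver version on the WN; it selects the matching container image tag nvdrv_$driver<br />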
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
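# pass the GPU device nodes through to the container and bind-mount the job directory on /home<br />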
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
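<br />
Once the job has finished, the output sandbox (out.out, err.err and res-gpu.tgz) can be fetched back from the CREAM CE. The retrieval commands are not part of the original test description; as a minimal sketch, assuming the standard CREAM CLI commands glite-ce-job-status and glite-ce-job-output are available on the UI and that the job identifier was saved in jobid.txt as above:<br />
<pre>$ glite-ce-job-status -i jobid.txt      # wait until the job reaches DONE-OK<br />
$ glite-ce-job-output -i jobid.txt      # downloads the files listed in outputsandbox<br />
$ tar zxvf res-gpu.tgz                  # run inside the directory created by the previous command</pre> <br />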
<br />
'''Important update of 19 September 2016''':<br />
<br />
If the Docker engine is not available on the grid WNs, you can use the INDIGO-DataCloud "udocker" tool. This has the advantage that Docker containers are run in user space, so the grid user does not obtain root privileges on the WN, thus avoiding any security concern. In this case the disvis.sh file is as follows:<br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
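# fetch udocker into the job directory: it runs containers entirely in user space, so no root installation is needed<br />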
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd "disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv"<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_$driver<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the JDL above were taken from the GitHub repository of DisVis, available at: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is in line with what is expected for each card type. <br />
<br />
The timings were compared with those of an in-house GPU node at Utrecht University (GTX680 card); they may differ with the latest update of the DisVis code: <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently unable to exploit both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, simply replace disvis.jdl with a powerfit.jdl like the following:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
where powerfit.sh is available [http://pastebin.com/uugj8ZQv here] and the input data can be downloaded from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).<br />
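<br />
The linked powerfit.sh is not reproduced on this page. Purely as an illustration, a script following the same udocker pattern as disvis.sh above might look like the sketch below; the indigodatacloudapps/powerfit image name, its nvdrv_$driver tag and the powerfit command-line arguments are assumptions made by analogy, so refer to the pastebin script and the PowerFit documentation for the actual invocation:<br />
<pre>#!/bin/sh<br />
# illustrative sketch only: mirrors the udocker-based disvis.sh shown earlier<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
export WDIR=`pwd`<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
# image name and tag assumed by analogy with the DisVis image; check the indigodatacloudapps repository<br />
./udocker.py pull indigodatacloudapps/powerfit:nvdrv_$driver<br />
rnd=$RANDOM<br />
./udocker.py create --name=powerfit-$rnd indigodatacloudapps/powerfit:nvdrv_$driver<br />
mkdir $WDIR/out<br />
# the powerfit arguments (map, resolution, structure) and options below are placeholders<br />
./udocker.py run -v /dev --volume=$WDIR:/home powerfit-$rnd "powerfit /home/1046.map 10 /home/GroES_1gru.pdb -g -d /home/out"<br />
./udocker.py rm powerfit-$rnd<br />
./udocker.py rmi indigodatacloudapps/powerfit:nvdrv_$driver<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/</pre> <br />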
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing the latest NVIDIA drivers and the DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs so that they are ready for installing the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh-key, then sudo su -, and then execute ./install-gpu-driver.sh. After that, the VM is ready for installing the application software by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh. A consolidated sketch of these steps is given after this list.<br />
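<br />
For convenience, the sketch below only consolidates the commands already listed above into a single session; user and VM_IP are placeholders that depend on the site and image used.<br />
<pre># on a host with an occi client and the VM creation scripts<br />
voms-proxy-init --voms enmr.eu        # a valid VOMS proxy is required by the creation scripts<br />
./IISAS-create-VM.sh                  # or ./CESNET-create-VM.sh<br />
# on the newly created VM (log in with the ssh-key inserted into the user_data)<br />
ssh user@VM_IP<br />
sudo su -<br />
./install-gpu-driver.sh               # install the NVIDIA driver<br />
./install-disvis.sh                   # and/or ./install-powerfit.sh<br />
/home/run-disvisGPU.sh                # and/or /home/run-powerfitGPU.sh (test samples)</pre></div>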
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89836
Competence centre MoBrain
2016-09-21T14:27:58Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with opencl + nvidia drivers + DisVis application and one with PowerFit application ready to run on GPU servers have been prepared for Ubuntu, with the goals of checking performance described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)] the DisVis and PowerFit docker images have been then re-build to work on the SL6 GPU servers which had a NVIDIA driver 319.x supporting only CUDA 5.5 version. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO has been carried out with the expected performance the 1st of December 2015.<br />
*'''The 14th of July 2016 the CIRMMP servers were updated with the latest NVIDIA driver 352.93 supporting now CUDA 7.5 version. DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via dockerhub at [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
<br />
'''Important updated of the 19th September 2016''':<br />
<br />
If docker engine is not available on the grid WNs, you can use the INDIGO-DataCloud "udocker" tool. This has the advantage that docker containers are run in the user space, so the grid user does not obtain root privileges in the WN, avoiding this way any security concern. The disvis.sh file in this case is as below:<br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd "disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv"<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_$driver<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card), but can differ with the last update of the DisVis code: <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both the available GPGPUs. However, we plan to use this testbed to parallelize it on mulitple GPUs, which should be relatively straightforward. Values marked with (VM) refers to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
For running PowerFit just replace disvis.jdl with a powerfit.jdl like e.g.:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh in the VM with your ssh-key, then sudo su -, and then execute ./install-gpu-driver.sh. After that the VM is ready to install the application software,by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89835
Competence centre MoBrain
2016-09-21T14:26:26Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with opencl + nvidia drivers + DisVis application and one with PowerFit application ready to run on GPU servers have been prepared for Ubuntu, with the goals of checking performance described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)] the DisVis and PowerFit docker images have been then re-build to work on the SL6 GPU servers which had a NVIDIA driver 319.x supporting only CUDA 5.5 version. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO has been carried out with the expected performance the 1st of December 2015.<br />
*'''The 14th of July 2016 the CIRMMP servers were updated with the latest NVIDIA driver 352.93 supporting now CUDA 7.5 version. DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via dockerhub at [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
<br />
'''Updated the 19th of September 2016''':<br />
<br />
If docker engine is not available on the grid WNs, you can use "udocker" tool. This has the advantage that docker containers are run in the user space, so the grid user does not obtain root privileges in the WN, avoiding this way any security concern. The disvis.sh file in this case is as below:<br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd "disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv"<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_$driver<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card), but can differ with the last update of the DisVis code: <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both the available GPGPUs. However, we plan to use this testbed to parallelize it on mulitple GPUs, which should be relatively straightforward. Values marked with (VM) refers to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
For running PowerFit just replace disvis.jdl with a powerfit.jdl like e.g.:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh in the VM with your ssh-key, then sudo su -, and then execute ./install-gpu-driver.sh. After that the VM is ready to install the application software,by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89784
Competence centre MoBrain
2016-09-20T13:47:04Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with opencl + nvidia drivers + DisVis application and one with PowerFit application ready to run on GPU servers have been prepared for Ubuntu, with the goals of checking performance described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)] the DisVis and PowerFit docker images have been then re-build to work on the SL6 GPU servers which had a NVIDIA driver 319.x supporting only CUDA 5.5 version. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO has been carried out with the expected performance the 1st of December 2015.<br />
*'''The 14th of July 2016 the CIRMMP servers were updated with the latest NVIDIA driver 352.93 supporting now CUDA 7.5 version. DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via dockerhub at [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
<br />
'''Updated the 19th of September 2016''':<br />
<br />
If docker engine is not available on the grid WNs, you can use udocker in disvis.sh:<br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd "disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv"<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_$driver<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card), but can differ with the last update of the DisVis code: <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both the available GPGPUs. However, we plan to use this testbed to parallelize it on mulitple GPUs, which should be relatively straightforward. Values marked with (VM) refers to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
For running PowerFit just replace disvis.jdl with a powerfit.jdl like e.g.:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh in the VM with your ssh-key, then sudo su -, and then execute ./install-gpu-driver.sh. After that the VM is ready to install the application software,by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89782
Competence centre MoBrain
2016-09-20T13:46:43Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with opencl + nvidia drivers + DisVis application and one with PowerFit application ready to run on GPU servers have been prepared for Ubuntu, with the goals of checking performance described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)] the DisVis and PowerFit docker images have been then re-build to work on the SL6 GPU servers which had a NVIDIA driver 319.x supporting only CUDA 5.5 version. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO has been carried out with the expected performance the 1st of December 2015.<br />
*'''The 14th of July the CIRMMP servers were updated with the latest NVIDIA driver 352.93 supporting now CUDA 7.5 version. DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via dockerhub at [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
<br />
'''Updated the 19th of September 2016''':<br />
<br />
If docker engine is not available on the grid WNs, you can use udocker in disvis.sh:<br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd "disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv"<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_$driver<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card), but can differ with the last update of the DisVis code: <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both the available GPGPUs. However, we plan to use this testbed to parallelize it on mulitple GPUs, which should be relatively straightforward. Values marked with (VM) refers to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
For running PowerFit just replace disvis.jdl with a powerfit.jdl like e.g.:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing the latest NVIDIA drivers and the DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh-key, then sudo su -, and then execute ./install-gpu-driver.sh. After that the VM is ready for the installation of the application software, by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89758
Competence centre MoBrain
2016-09-20T12:05:57Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that is better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostics and the full characterization of every pathological mechanism of brain diseases, through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open to additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the goal of checking the performance as described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)] the DisVis and PowerFit docker images have then been re-built to work on the SL6 GPU servers, which had an NVIDIA 319.x driver supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated with the latest NVIDIA driver 352.93, now supporting CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via dockerhub at the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming the docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
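# The --device flags below expose both GPUs and the NVIDIA control/UVM devices to the container,<br />
# the job directory is bind-mounted as /home, and the image tag is chosen to match the host driver version detected above<br />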
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
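Once the job has finished, its status can be checked and the output sandbox (out.out, err.err and res-gpu.tgz) retrieved with the CREAM CLI client. A minimal sketch, assuming the standard -i option that reads the job identifier from the jobid.txt file written by the -o option above:<br />
<pre>$ glite-ce-job-status -i jobid.txt<br />
$ glite-ce-job-output -i jobid.txt<br />
# res-gpu.tgz can then be unpacked from the directory created by glite-ce-job-output</pre> <br />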
<br />
'''Updated the 19th of September 2016''':<br />
<br />
If the docker engine is not available on the grid WNs, you can use udocker in disvis.sh instead:<br />
<pre>#!/bin/sh<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
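# -v /dev exposes the host /dev tree (and hence the NVIDIA devices) inside the container,<br />
# while --volume=$WDIR:/home bind-mounts the job directory holding the input files and results<br />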
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd "disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out; \<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv"<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_$driver<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the jdl script above were taken from the DisVis GitHub repository: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is as expected for each card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card); they may differ with the latest update of the DisVis code: <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, simply replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=1;<br />
]<br />
<br />
with powerfit.sh available [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).<br />
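For reference, here is a minimal sketch of what such a powerfit.sh could look like when modelled on the disvis.sh script above; the authoritative script is the one linked. The indigodatacloudapps/powerfit image tag, the PowerFit options and the map resolution value (here 10) are assumptions and should be checked against the linked script:<br />
<pre>#!/bin/sh<br />
# Sketch only: image tag, PowerFit options and resolution value are assumptions<br />
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')<br />
export WDIR=`pwd`<br />
mkdir res-gpu<br />
rnd=$RANDOM<br />
docker run --name=powerfit-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidiactl:/dev/nvidiactl \<br />
--device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/powerfit:nvdrv_$driver /bin/sh \<br />
-c 'powerfit /home/1046.map 10 /home/GroES_1gru.pdb -g -d /home/res-gpu'<br />
docker rm powerfit-$rnd<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />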
<br />
=== How to run DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing the latest NVIDIA drivers and the DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh-key, then sudo su -, and then execute ./install-gpu-driver.sh. After that the VM is ready for the installation of the application software, by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh. The full sequence of commands is sketched below.<br />
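As a summary, a minimal sketch of the whole workflow for the IISAS site (the login user, ssh-key path and VM IP address are placeholders; the actual occi commands are inside the linked create-VM scripts):<br />
<pre>$ voms-proxy-init -voms enmr.eu   # VOMS proxy, as required above<br />
$ ./IISAS-create-VM.sh            # instantiate the GPGPU-enabled VM (uses the occi client)<br />
$ ssh -i ~/.ssh/your_key centos@VM_IP<br />
$ sudo su -<br />
# ./install-gpu-driver.sh<br />
# ./install-disvis.sh<br />
# /home/run-disvisGPU.sh</pre></div>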
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89735
Competence centre MoBrain
2016-09-19T16:04:27Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with opencl + nvidia drivers + DisVis application and one with PowerFit application ready to run on GPU servers have been prepared for Ubuntu, with the goals of checking performance described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)] the DisVis and PowerFit docker images have been then re-build to work on the SL6 GPU servers which had a NVIDIA driver 319.x supporting only CUDA 5.5 version. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO has been carried out with the expected performance the 1st of December.<br />
*'''The 14th of July the CIRMMP servers were updated with the latest NVIDIA driver 352.93 supporting now CUDA 7.5 version. DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via dockerhub at [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_352.93 /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
<br />
'''Updated the 19th of September 2016''':<br />
<br />
If docker engine is not available on the grid WNs, you can use udocker in disvis.sh:<br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_352.93<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_352.93<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd "disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out; nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv"<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_352.93<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both the available GPGPUs. However, we plan to use this testbed to parallelize it on mulitple GPUs, which should be relatively straightforward. Values marked with (VM) refers to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
For running PowerFit just replace disvis.jdl with a powerfit.jdl like e.g.:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
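<br />
The actual powerfit.sh is the one linked above. Purely for illustration, a minimal sketch of such a script is given below, assuming it follows the same docker pattern as disvis.sh, that the PowerFit image is published alongside the DisVis one in the indigodatacloudapps repository, and using a placeholder resolution value for the 1046.map density:<br />
<pre>#!/bin/sh<br />
# sketch only: image name, tag and PowerFit arguments are assumptions, not the script linked above<br />
export WDIR=`pwd`<br />
mkdir res-gpu<br />
rnd=$RANDOM<br />
docker run --name=powerfit-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidiactl:/dev/nvidiactl \<br />
--device=/dev/nvidia-uvm:/dev/nvidia-uvm -v $WDIR:/home indigodatacloudapps/powerfit /bin/sh \<br />
-c 'powerfit /home/1046.map 10 /home/GroES_1gru.pdb -g -d /home/res-gpu'<br />
docker rm powerfit-$rnd<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />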
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh key, then sudo su -, and then execute ./install-gpu-driver.sh. After that, the application software can be installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh. A sketch of the kind of occi call such creation scripts wrap is shown below, for illustration only.</div>
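For illustration only, a rough sketch of the rOCCI client call that such a creation script wraps; the endpoint and the os_tpl/resource_tpl identifiers are site-specific placeholders, not values taken from the scripts linked above:<br />
<pre># sketch only: replace endpoint and template identifiers with the values of the chosen site<br />
occi --endpoint https://SITE-OCCI-ENDPOINT:11443 --auth x509 --user-cred /tmp/x509up_u$(id -u) --voms \<br />
--action create --resource compute \<br />
--mixin os_tpl#IMAGE_TEMPLATE_ID --mixin resource_tpl#GPU_FLAVOUR_ID \<br />
--attribute occi.core.title="disvis-gpu-vm" \<br />
--context user_data="file://$PWD/user_data_centos7"</pre> <br />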
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89734
Competence centre MoBrain
2016-09-19T13:31:19Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with direct relevance for neuroscience, ranging from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, and improved diagnostics, to the full characterization of every pathological mechanism of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A Docker image with OpenCL + NVIDIA drivers + the DisVis application, and another with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the performance-checking goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit Docker images were then rebuilt to work on the SL6 GPU servers, which had an NVIDIA 319.x driver supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_352.93 /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
<br />
'''Updated the 19th of September 2016''':<br />
<br />
If docker engine is not available on the grid WNs, you can use udocker in disvis.sh:<br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_352.93<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_352.93<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_352.93<br />
echo time after udocker rm and rmi = $(date)<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh key, then sudo su -, and then execute ./install-gpu-driver.sh. After that, the application software can be installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89733
Competence centre MoBrain
2016-09-19T13:30:17Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with direct relevance for neuroscience, ranging from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, and improved diagnostics, to the full characterization of every pathological mechanism of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A Docker image with OpenCL + NVIDIA drivers + the DisVis application, and another with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the performance-checking goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit Docker images were then rebuilt to work on the SL6 GPU servers, which had an NVIDIA 319.x driver supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is (assuming docker engine is installed on the grid WNs): <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_352.93 /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
<br />
'''Updated the 19th of September 2016''':<br />
If docker engine is not available on the grid WNs, you can use udocker in disvis.sh:<br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname)<br />
echo user=$(id)<br />
export WDIR=`pwd`<br />
echo udocker run disvis...<br />
echo starttime=$(date)<br />
git clone https://github.com/indigo-dc/udocker<br />
cd udocker<br />
./udocker.py pull indigodatacloudapps/disvis:nvdrv_352.93<br />
echo time after pull = $(date)<br />
rnd=$RANDOM<br />
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_352.93<br />
echo time after udocker create = $(date)<br />
mkdir $WDIR/out<br />
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 9.27 -vs 2 -d /home/out<br />
echo time after udocker run = $(date)<br />
./udocker.py rm disvis-$rnd<br />
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_352.93<br />
echo time after udocker rm and rmi = $(date)<br />
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
cd $WDIR<br />
tar zcvf res-gpu.tgz out/<br />
echo endtime=$(date)</pre> <br />
<br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh key, then sudo su -, and then execute ./install-gpu-driver.sh. After that, the application software can be installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89533
Competence centre MoBrain
2016-09-14T13:13:48Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with direct relevance for neuroscience, ranging from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, and improved diagnostics, to the full characterization of every pathological mechanism of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A Docker image with OpenCL + NVIDIA drivers + the DisVis application, and another with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the performance-checking goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit Docker images were then rebuilt to work on the SL6 GPU servers, which had an NVIDIA 319.x driver supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis:nvdrv_352.93 /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh key, then sudo su -, and then execute ./install-gpu-driver.sh. After that, the application software can be installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89532
Competence centre MoBrain
2016-09-14T12:48:35Z
<p>Verlato: </p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with direct relevance for neuroscience, ranging from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, and improved diagnostics, to the full characterization of every pathological mechanism of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A Docker image with OpenCL + NVIDIA drivers + the DisVis application, and another with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the performance-checking goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit Docker images were then rebuilt to work on the SL6 GPU servers, which had an NVIDIA 319.x driver supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
rnd=$RANDOM <br />
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
docker rm disvis-$rnd<br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh key, then sudo su -, and then execute ./install-gpu-driver.sh. After that, the application software can be installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89492
Competence centre MoBrain
2016-09-13T12:57:54Z
<p>Verlato: </p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with direct relevance for neuroscience, ranging from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, and improved diagnostics, to the full characterization of every pathological mechanism of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentations ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A Docker image with OpenCL + NVIDIA drivers + the DisVis application, and another with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the performance-checking goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit Docker images were then rebuilt to work on the SL6 GPU servers, which had an NVIDIA 319.x driver supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date)<br />
id=$(date +%s) <br />
docker run --name=disvis-$id --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
docker rm disvis-$id<br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), you have to ssh into the VM with your ssh key, then sudo su -, and then execute ./install-gpu-driver.sh. After that, the application software can be installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=89490
Competence centre MoBrain
2016-09-13T12:47:58Z
<p>Verlato: </p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with direct relevance for neuroscience, ranging from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, and improved diagnostics, to the full characterization of every pathological mechanism of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the goal of checking the performance as described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which had NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
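# expose both NVIDIA GPU devices plus the control and UVM devices to the container, and mount the work dir on /home<br />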
docker run --name=disvis --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
docker rm disvis<br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
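Once the job has completed, its status can be checked and the output sandbox (containing res-gpu.tgz) retrieved with the standard CREAM CLI; a minimal sketch, assuming the -i option to read the job identifier from the jobid.txt file written by -o above:<br />
<pre>$ glite-ce-job-status -i jobid.txt<br />
$ glite-ce-job-output -i jobid.txt</pre> <br />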
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as given [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).<br />
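Submission then follows the same pattern as in the DisVis example above (same CE and options), e.g.:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch powerfit.jdl</pre> <br />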
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing the latest NVIDIA drivers and the DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
*The scripts use the following user_data to contextualise the VMs so that they are ready for installing the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), ssh into the VM with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh. The VM is then ready for the application software, which is installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh. A sketch of the whole sequence is given below.<br />
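<br />
The following is a minimal sketch of the instantiation and post-instantiation steps described above (it is not taken from the site-specific scripts themselves): the OCCI endpoint, the os_tpl/resource_tpl identifiers, the VM address 192.0.2.10 and the login user ubuntu are placeholders to be replaced with the values of the site and image actually used.<br />
<pre># instantiate a GPGPU-enabled VM with the rOCCI CLI (placeholder endpoint and templates;<br />
# the CESNET/IISAS create-VM scripts above select the proper values for each site)<br />
occi --endpoint https://occi.example.org:11443 --auth x509 --user-cred /tmp/x509up_u$(id -u) --voms \<br />
     --action create --resource compute \<br />
     --mixin os_tpl#PLACEHOLDER --mixin resource_tpl#PLACEHOLDER \<br />
     --attribute occi.core.title="disvis-gpu-vm" \<br />
     --context user_data="file://$PWD/user_data_ubuntu"<br />
# log into the newly created VM with the ssh key inserted in the user_data (placeholder address/user)<br />
ssh ubuntu@192.0.2.10<br />
# become root and install the NVIDIA driver prepared by the contextualisation<br />
sudo su -<br />
./install-gpu-driver.sh<br />
# install the application software (either or both)<br />
./install-disvis.sh<br />
./install-powerfit.sh<br />
# run the bundled test samples<br />
sh /home/run-disvisGPU.sh<br />
sh /home/run-powerfitGPU.sh</pre> <br />
</div>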
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88786
Competence centre MoBrain
2016-07-17T22:13:27Z
<p>Verlato: /* How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the goal of checking the performance as described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which had NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), ssh into the VM with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh. The VM is then ready for the application software, which is installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88785
Competence centre MoBrain
2016-07-17T22:02:42Z
<p>Verlato: /* How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the goal of checking the performance as described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which had NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an [https://wiki.egi.eu/wiki/Fedcloud-tf:CLI_Environment occi client] installed), ssh into the VMs with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh. The VM is then ready for the application software, which is installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88784
Competence centre MoBrain
2016-07-17T21:40:58Z
<p>Verlato: /* How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the goal of checking the performance as described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which had NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis and/or PowerFit software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an occi client installed), ssh into the VMs with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh. The VM is then ready for the application software, which is installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available in /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88750
Competence centre MoBrain
2016-07-14T16:45:46Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the goal of checking the performance as described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which had NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 (VM) 15.5<br />
1xK20 (VM) 13.5<br />
2xK20 11<br />
1xK20 11</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an occi client installed), ssh into the VMs with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh (it will also install the DisVis software). After that the VM is ready to execute disvis.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88749
Competence centre MoBrain
2016-07-14T15:55:06Z
<p>Verlato: /* How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to global competence center and virtual research environment for translational research from molecular to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building onto and contributing to the EGI service offering. MoBrain will produce a working environment that will be better tailored to the end user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory, with a direct relevance for neuroscience, starting from the quantification of the molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostic and the full characterization of every pathological mechanism of brain diseases through both phenomenological as well as mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Cientificas (and Spanish NGI) <br />
*Science and Technology Facility Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di SanGiovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*A docker image with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, both ready to run on GPU servers, have been prepared for Ubuntu, with the goal of checking the performance as described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which had NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on the 1st of December.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which now supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*The scripts and commands used to run the test are described below:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the jdl script above were taken from the GitHub repo for disvis available from: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance on the GPGPU grid resources is what is expected for the card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 16<br />
2xK20 12<br />
1xK20 12</pre> <br />
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward.<br />
<br />
To run PowerFit, just replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh as [http://pastebin.com/uugj8ZQv here] and input data taken from [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David)<br />
<br />
=== How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on IISAS (CentOS7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing latest NVIDIA drivers and DisVis software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
* The scripts use the following user_data to contextualise the VMs in order to be ready to install the needed software: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh-key. <br />
*After having run the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an occi client installed), ssh into the VMs with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh (it will also install the DisVis software). After that the VM is ready to execute disvis.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88748
Competence centre MoBrain
2016-07-14T15:52:23Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that is better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory with direct relevance for neuroscience, covering the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostics, and the full characterization of the pathological mechanisms of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Científicas (and Spanish NGI) <br />
*Science and Technology Facilities Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di San Giovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*Two docker images ready to run on Ubuntu GPU servers have been prepared, one bundling OpenCL, the NVIDIA drivers and the DisVis application and one with the PowerFit application, with the performance-evaluation goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which at the time ran NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP; successful grid submissions to this cluster using the enmr.eu VO were carried out on the 1st of December, with the expected performance.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
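Once the job has finished, the files listed in outputsandbox (out.out, err.err and the res-gpu.tgz archive created at the end of disvis.sh) can be retrieved with the standard CREAM CE client commands. A minimal sketch, assuming the job identifier was stored in jobid.txt by the glite-ce-job-submit command above; glite-ce-job-output prints the name of the local directory into which it downloads the sandbox: <br />
<pre>$ glite-ce-job-status -i jobid.txt     # repeat until the job reaches the DONE-OK status<br />
$ glite-ce-job-output -i jobid.txt     # downloads out.out, err.err and res-gpu.tgz<br />
$ cd <directory reported by glite-ce-job-output><br />
$ tar xfz res-gpu.tgz                  # unpack the DisVis results for inspection</pre> <br />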
The example input files specified in the JDL above were taken from the DisVis GitHub repository: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance obtained on these GPGPU grid resources is in line with what is expected for each card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
M2090 16<br />
2xK20 12<br />
1xK20 12</pre> <br />
This indicates that DisVis, as expected, is currently unable to make use of both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward.<br />
<br />
To run PowerFit, simply replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh available [http://pastebin.com/uugj8ZQv here] and the input data available [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).<br />
<br />
=== How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (Tesla K20m GPUs) and CESNET (M2090 GPUs) FedCloud sites and to install the latest NVIDIA drivers and the DisVis software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
*The scripts use the following user_data files to contextualise the VMs so that they are ready for the software installation: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh key. <br />
*After running the scripts above (a voms-proxy-init -voms enmr.eu -r is needed beforehand, from a host with an occi client installed), ssh into the VMs with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh (it also installs the DisVis software). After that the VM is ready to run disvis.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88747
Competence centre MoBrain
2016-07-14T15:51:35Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that is better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory with direct relevance for neuroscience, covering the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostics, and the full characterization of the pathological mechanisms of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Científicas (and Spanish NGI) <br />
*Science and Technology Facilities Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di San Giovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*Two docker images ready to run on Ubuntu GPU servers have been prepared, one bundling OpenCL, the NVIDIA drivers and the DisVis application and one with the PowerFit application, with the performance-evaluation goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which at the time ran NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP; successful grid submissions to this cluster using the enmr.eu VO were carried out on the 1st of December, with the expected performance.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the JDL above were taken from the DisVis GitHub repository: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance obtained on these GPGPU grid resources is in line with what is expected for each card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
2xK20 12<br />
1xK20 12<br />
1xM2090 16</pre> <br />
This indicates that DisVis, as expected, is currently unable to make use of both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward.<br />
<br />
To run PowerFit, simply replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh available [http://pastebin.com/uugj8ZQv here] and the input data available [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).<br />
<br />
=== How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (Tesla K20m GPUs) and CESNET (M2090 GPUs) FedCloud sites and to install the latest NVIDIA drivers and the DisVis software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
*The scripts use the following user_data files to contextualise the VMs so that they are ready for the software installation: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh key. <br />
*After running the scripts above (a voms-proxy-init -voms enmr.eu -r is needed beforehand, from a host with an occi client installed), ssh into the VMs with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh (it also installs the DisVis software). After that the VM is ready to run disvis.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88743
Competence centre MoBrain
2016-07-14T15:32:24Z
<p>Verlato: /* How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that is better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory with direct relevance for neuroscience, covering the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostics, and the full characterization of the pathological mechanisms of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Científicas (and Spanish NGI) <br />
*Science and Technology Facilities Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di San Giovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*Two docker images ready to run on Ubuntu GPU servers have been prepared, one bundling OpenCL, the NVIDIA drivers and the DisVis application and one with the PowerFit application, with the performance-evaluation goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which at the time ran NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP; successful grid submissions to this cluster using the enmr.eu VO were carried out on the 1st of December, with the expected performance.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the JDL above were taken from the DisVis GitHub repository: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance obtained on these GPGPU grid resources is in line with what is expected for each card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
2xK20 12<br />
1xK20 12</pre> <br />
This indicates that DisVis, as expected, is currently unable to make use of both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward.<br />
<br />
To run PowerFit, simply replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh available [http://pastebin.com/uugj8ZQv here] and the input data available [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).<br />
<br />
=== How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (Tesla K20m GPUs) and CESNET (M2090 GPUs) FedCloud sites and to install the latest NVIDIA drivers and the DisVis software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
*The scripts use the following user_data files to contextualise the VMs so that they are ready for the software installation: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh key. <br />
*After running the scripts above (a voms-proxy-init -voms enmr.eu -r is needed beforehand, from a host with an occi client installed), ssh into the VMs with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh (it also installs the DisVis software). After that the VM is ready to run disvis.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88742
Competence centre MoBrain
2016-07-14T15:28:44Z
<p>Verlato: /* How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that is better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory with direct relevance for neuroscience, covering the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostics, and the full characterization of the pathological mechanisms of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Científicas (and Spanish NGI) <br />
*Science and Technology Facilities Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di San Giovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*Two docker images ready to run on Ubuntu GPU servers have been prepared, one bundling OpenCL, the NVIDIA drivers and the DisVis application and one with the PowerFit application, with the performance-evaluation goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which at the time ran NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP; successful grid submissions to this cluster using the enmr.eu VO were carried out on the 1st of December, with the expected performance.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the JDL above were taken from the DisVis GitHub repository: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance obtained on these GPGPU grid resources is in line with what is expected for each card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
2xK20 12<br />
1xK20 12</pre> <br />
This indicates that DisVis, as expected, is currently unable to make use of both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward.<br />
<br />
To run PowerFit, simply replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh available [http://pastebin.com/uugj8ZQv here] and the input data available [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).<br />
<br />
=== How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS and CESNET FedCloud sites and to install the latest NVIDIA drivers and the DisVis software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
*The scripts use the following user_data files to contextualise the VMs so that they are ready for the software installation: [http://pastebin.com/f48qMmsN user_data_ubuntu] and [http://pastebin.com/xCdhkTZr user_data_centos7]. Customise them by inserting your public ssh key. <br />
*After running the scripts above (a voms-proxy-init -voms enmr.eu -r is needed beforehand, from a host with an occi client installed), ssh into the VMs with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh (it also installs the DisVis software). After that the VM is ready to run disvis.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88741
Competence centre MoBrain
2016-07-14T14:47:44Z
<p>Verlato: </p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that is better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory with direct relevance for neuroscience, covering the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostics, and the full characterization of the pathological mechanisms of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Científicas (and Spanish NGI) <br />
*Science and Technology Facilities Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di San Giovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*Two docker images ready to run on Ubuntu GPU servers have been prepared, one bundling OpenCL, the NVIDIA drivers and the DisVis application and one with the PowerFit application, with the performance-evaluation goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which at the time ran NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP; successful grid submissions to this cluster using the enmr.eu VO were carried out on the 1st of December, with the expected performance.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the JDL above were taken from the DisVis GitHub repository: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance obtained on these GPGPU grid resources is in line with what is expected for each card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
2xK20 12<br />
1xK20 12</pre> <br />
This indicates that DisVis, as expected, is currently unable to make use of both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward.<br />
<br />
To run PowerFit, simply replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh available [http://pastebin.com/uugj8ZQv here] and the input data available [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).<br />
<br />
=== How to run the DisVis on VMs of the Federated Cloud using the enmr.eu VO (updated 14th of July 2016)===<br />
*Scripts are available to instantiate GPGPU-enabled VMs on the IISAS and CESNET FedCloud sites and to install the latest NVIDIA drivers and the DisVis software: [http://pastebin.com/5ueQRerr CESNET-create-VM.sh] and [http://pastebin.com/ARxadbn9 IISAS-create-VM.sh]<br />
*After running the scripts above (a voms-proxy-init -voms enmr.eu -r is needed beforehand, from a host with an occi client installed), ssh into the VMs with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh (it also installs the DisVis software). After that the VM is ready to run disvis.</div>
Verlato
https://wiki.egi.eu/w/index.php?title=Competence_centre_MoBrain&diff=88724
Competence centre MoBrain
2016-07-14T13:50:13Z
<p>Verlato: /* How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016) */</p>
<hr />
<div>{{Template:EGI-Engage_CC}} <br />
<br />
<br />
{{TOC_right}} <br />
<br />
[[Image:Mobrain.png|300px|right]]<br />
<br />
<br />
'''MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain'''<br />
<br />
<br> <br />
<br> <br />
<br />
'''CC Coordinator:''' Alexandre M.J.J. Bonvin<br />
<br />
'''CC Coordinator deputy:''' Antonio Rosato<br />
<br />
'''CC members' list:''' cc-mobrain AT mailman.egi.eu <br />
<br />
'''CC meetings:''' https://indico.egi.eu/indico/categoryDisplay.py?categId=145 <br />
<br />
<br> <br />
<br />
<br />
== Introduction ==<br />
<br />
Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health. <br />
<br />
The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu), and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and in the long term the Human Brain Project (FET Flagship), and strengthen the EGI services offering. <br />
<br />
By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the path to a global competence center and virtual research environment for translational research from molecule to brain. <br />
<br />
There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that is better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory with direct relevance for neuroscience, covering the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, improved diagnostics, and the full characterization of the pathological mechanisms of brain diseases through both phenomenological and mechanistic approaches. <br />
<br />
<br />
<br />
== MoBrain partners ==<br />
<br />
*Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands <br />
*Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy. <br />
*Consejo Superior de Investigaciones Científicas (and Spanish NGI) <br />
*Science and Technology Facilities Council, UK <br />
*Provincia Lombardo Veneta Ordine Ospedaliero di San Giovanni di Dio – Fatebenefratelli, Italy <br />
*GNUBILA, France <br />
*Istituto Nazionale Di Fisica Nucleare, Italy <br />
*SURFsara (Dutch NGI) <br />
*CESNET (Czech NGI) <br />
*Open Science Grid (US)<br />
<br />
The CC is open for additional members. Please email the CC coordinator to join. <br />
<br />
<br />
== Tasks ==<br />
<br />
<br />
=== T1: Cryo-EM in the cloud: bringing clouds to the data ===<br />
<br />
=== T2: GPU portals for biomolecular simulations ===<br />
<br />
=== T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities ===<br />
<br />
=== T4: User support and training ===<br />
<br />
== Deliverables ==<br />
<br />
* D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3<br />
* D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2<br />
* D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3<br />
* D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1<br />
<br />
<br />
[[Category:EGI-Engage]]<br />
<br />
== Technical documentation ==<br />
<br />
=== How to run the DisVis and PowerFit docker images on the enmr.eu VO (updated 14th of July 2016)===<br />
<br />
*Two docker images ready to run on Ubuntu GPU servers have been prepared, one bundling OpenCL, the NVIDIA drivers and the DisVis application and one with the PowerFit application, with the performance-evaluation goals described [https://drive.google.com/file/d/0B44fOmGpGehGWVVTXzZmU3F1WlE/view here]. <br />
*In collaboration with [https://wiki.egi.eu/wiki/EGI-Engage:TASK_JRA2.4_Accelerated_Computing EGI-Engage task JRA2.4 (Accelerated Computing)], the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, which at the time ran NVIDIA driver 319.x supporting only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP; successful grid submissions to this cluster using the enmr.eu VO were carried out on the 1st of December, with the expected performance.<br />
*'''On the 14th of July the CIRMMP servers were updated to the latest NVIDIA driver, 352.93, which supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub from the [https://github.com/indigo-dc/ansible-role-disvis-powerfit indigodatacloudapps repository].''' <br />
*Here follows a description of the scripts and commands used to run the test:<br />
<pre>$ voms-proxy-init --voms enmr.eu <br />
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl</pre> <br />
where disvis.jdl is:&nbsp; <br />
<br />
[<br />
executable = "disvis.sh";<br />
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
<br />
and disvis.sh is: <br />
<pre>#!/bin/sh<br />
echo hostname=$(hostname) <br />
echo user=$(id) <br />
export WDIR=`pwd` <br />
mkdir res-gpu<br />
echo docker run disvis... <br />
echo starttime=$(date) <br />
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \<br />
--device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \<br />
-v $WDIR:/home indigodatacloudapps/disvis /bin/sh \<br />
-c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu' <br />
echo endtime=$(date) <br />
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv<br />
tar cfz res-gpu.tgz res-gpu</pre> <br />
The example input files specified in the JDL above were taken from the DisVis GitHub repository: [https://github.com/haddocking/disvis https://github.com/haddocking/disvis] <br />
<br />
The performance obtained on these GPGPU grid resources is in line with what is expected for each card type. <br />
<br />
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card): <br />
<pre>GPGPU-type Timing[minutes]<br />
GTX680 19<br />
2xK20 12<br />
1xK20 12</pre> <br />
This indicates that DisVis, as expected, is currently unable to make use of both of the available GPGPUs. However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward.<br />
<br />
To run PowerFit, simply replace disvis.jdl with a powerfit.jdl such as:&nbsp;<br />
<br />
[<br />
executable = "powerfit.sh";<br />
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };<br />
stdoutput = "out.out";<br />
outputsandboxbasedesturi = "gsiftp://localhost";<br />
stderror = "err.err";<br />
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};<br />
GPUNumber=2;<br />
]<br />
<br />
with powerfit.sh available [http://pastebin.com/uugj8ZQv here] and the input data available [http://www.lip.pt/~david/powerfit-example.tgz here] (courtesy of Mario David).</div>
Verlato