The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.

RC FORUM

From EGIWiki
Revision as of 13:51, 20 November 2013


This page provides an overview of which NGIs and Resource Centres (or federations of resource centres) have expressed interest in acting as resource providers, through grid services or the EGI Federated Cloud, to support new user communities.

Resource Centre representatives are requested to provide, in the table below, information about the available capacity, the business model and local policies that must be met to make these resources available to new user communities, and the international collaborations already supported.

This information will be used to define the capacity available when approaching international collaborations, and the conditions under which it can be made available.

Resource Centres that join the EGI Federated Cloud will become storage and/or compute cloud providers satisfying a range of use cases (IaaS, PaaS and SaaS). Many proofs of concept have been successfully demonstrated on the EGI Federated Cloud, which is based on the federation of storage and compute resources through standard interfaces (OCCI and CDMI). The EGI Federated Cloud will be part of the hybrid cloud of Helix Nebula.
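As an illustration of the standard interfaces mentioned above, a compute instance on an OCCI-enabled site can be requested with a plain HTTP call. This is a minimal sketch following the OGF OCCI 1.1 HTTP rendering; the host name is hypothetical, and real sites typically also require X.509/VOMS authentication:

```http
POST /compute/ HTTP/1.1
Host: occi.example-site.eu
Content-Type: text/occi
Category: compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"
X-OCCI-Attribute: occi.compute.cores=2
X-OCCI-Attribute: occi.compute.memory=4.0
```

Because every federated site exposes the same Category scheme and attributes, the same request works against any provider in the federation; CDMI plays the analogous role for the storage side.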

The table below lists, for each NGI: the Resource Centre name; a contact; the resources planned by April 2014 and by December 2014 (specified for each category: grid storage, grid computing, cloud storage, cloud computing); the resource allocation policies and business models (local policies for gaining access to the resources and the foreseen model: free, pay per use, etc.); the area of expertise (areas of technical and user support expertise); and national and international collaborations (current or planned collaborations with national/international user-oriented projects or Research Infrastructures).

CERN CERN D. Foster
NDGF NDGF O. Smirnova
NGI_CH UNIBE-LHEP, UZH S. Maffioletti, Sigve Haug UNIBE-LHEP will offer access to its grid (ARC-based) infrastructure; UZH will offer cloud-based services (still exploring the EGI Federated Cloud) Cloud infrastructure: free for projects of national interest (or under the national programme business model); TBD for other use cases OpenStack, application integration, science gateways National: Swiss Academic Compute Cloud, systemsX.ch; international: ELIXIR, SCI-BUS, LCG-WLCG
NGI_CZ CESNET M. Ruda
NGI_DE JUELICH, FhG
NGI_DE KIT A. Heiss, A. Petzold, A. Streit
NGI_DK DCSC Regional operating centres R. Belso
NGI_GRNET GRNET SA K. Koumantaros We will offer a percentage of our grid and cloud infrastructure. TBA TBA Operations, monitoring, A/R, AppDB, software provisioning EUDAT, PRACE, FORGE, CLARIN-GR, StratusLab, HellasGrid, CHAIN-REDS, GN3, Open Discovery Space, GREENET, LDA, ICT-AGRI, CELAR
NGI_HR D. Dobrenic
NGI_SE J. Koster
NGI_PT NCG-INGRID-PT G. Borges, J. Gomes Grid computing: 2900 HS06, Grid storage: 10 TB, Cloud computing: 128 cores, Cloud storage: 10 TB The same capacity is secured, with an upgrade expected in 2014 Free for projects of national interest, pay per use for others Virtualization, HPC, services high availability, grid core services, monitoring, application porting, parallel computing, Infiniband, Lustre, networking, X.509, civil engineering, life sciences, ICT, astroparticles, high energy physics Portuguese Grid Initiative, IBERGRID, LifeWatch
NGI_ES PIC J. Fix
NGI_ES CSIC J. Marco/I. Campos Grid storage: ~2 PB, Grid computing: ~4000 cores; Cloud storage: ~1 PB, Cloud computing: ~2000 cores Grid storage: ~2 PB, Grid computing: ~4000 cores; Cloud storage: ~2 PB, Cloud computing: ~4000 cores Resource allocation by VOs/centres; 20% of resources are open to projects agreed within the NGI, 10% of resources prioritized under pay per use (at 0.05 euros/core for "external" users) Cloud: OpenStack, VOMS, image contextualization, MPI/parallel framework, integration of supercomputing resources, GPFS/HPC storage LHC-WLCG, LIFEWATCH, PLANCK, FET projects, ICT projects, SME projects (modelling, parallel)
NGI_ES CESGA I. Lopez/C. Fernandez Grid storage: ~100 TB, Grid computing: ~720 cores, Cloud storage: ~10 TB, Cloud computing: ~280 cores Grid storage: ~100 TB, Grid computing: ~720 cores, Cloud storage: ~10 TB, Cloud computing: ~280 cores Resource allocation by VOs/centres; 20% of resources are open to projects agreed within the NGI; resources can be prioritized under pay per use (at 0.05 euros/core for external users) Cloud: OpenNebula, image contextualization, MPI/parallel framework, integration of supercomputing resources, Lustre, grid/cloud accounting, software provisioning LHC-WLCG, SME projects
NGI_ES BIFI A. Tarancon/R. Valles Grid storage: 4 TB, Grid computing: 864 cores; Cloud storage: 4 TB, Cloud computing: 432 cores Grid storage: 4 TB, Grid computing: 864 cores; Cloud storage: 6 TB, Cloud computing: 600 cores Access to grid resources is free for research institutes and for users belonging to university research groups. Access to cloud resources is likewise free for researchers from universities or research groups, but companies have to pay depending on the resources they need. We provide technical support for both users and system administrators from other research institutes, including support for the fusion VO within EGI. CloudSME, collaboration with users and companies, SCI-BUS, development and deployment of science gateways and user support.
NGI_ES UPVLC I. Blanquer/M. Caballer 90 grid cores and 40 cloud virtual cores; 1 TB of grid storage and 1 TB of cloud storage. 90 grid cores and 100 cloud virtual cores; 1 TB of grid storage and 1 TB of cloud storage. Free access for research groups from universities and public research centres; for cloud infrastructures, we will request a short report on the purpose of the usage. We will offer additional services such as VM catalogues, automatic contextualization and automatic scaling. We will provide technical support for users and application developers on exploiting the infrastructure and the services. EUBrazilOpenBio, EUBrazilCC, CODECLOUD (national project).
NGI_FR IN2P3 G. Lamanna, Pierre-Etienne Macchi, F. Chollet (dep)
NGI_FR RC Federation G. Mathieu
NGI_IT ReCaS sites G. Maggi, G. Russo/INFN-UNIBA-UNINA

Bari: 400 cloud_cores,  400 TB cloud_storage

Napoli: 200 grid_cores, 200 TB grid_storage; 200 cloud_cores, 200 TB cloud_storage

Bari:  3000 cloud_cores, 1 PB cloud_storage

Napoli:  3000 cloud_cores, 1 PB cloud_storage

Catania:  200 cloud_cores, 100 TB cloud_storage

Cosenza: 100 cloud_cores, 100 TB cloud_storage

The resource allocation policy and the business model are under definition according to the following guidelines: access shall be granted to institutions collaborating with INFN, UNINA or UNIBA according to agreements defined in the projects (experiments). Other users may also be accepted on a "pay per use" basis.


OpenStack (VOMS enabled, EC2, web interface, batch system on demand), Object Storage (S3), porting of scientific applications to the grid/cloud infrastructure, user support, MPI, OpenMP, Lustre HPC file system, GPU computing, Infiniband, Hadoop, tape library (by the end of 2014), software provisioning for grid/local batch access, science gateways
LCG-WLCG, BioVeL, Belle2, KM3Net, PRISMA, ELIXIR-ITA
NGI_IT INFN T1 G. Maron, L. dell'Agnello
NGI Latvia Kaspars Krampis EGI fed cloud (by April 2014) EGI fed cloud (by December 2014)
NGI_MD 2 sites/RENAM P. Vaseanovici, N. Iliuha open to new user communities and collaborations
NGI_NL NIKHEF/SARA A. Berg, M. Bouwhuis, J. Templon, R. Trompert
NGI_PL CYFRONET T. Szepieniec

Grid: ~20k cores, ~1PB

Cloud: ~120 cores, ~4TB

Up to 50% may be offered as cloud resources Users need to apply for a resource allocation through PL-Grid; a representative of Polish science confirming the scientific collaboration is needed Service Level Management, operations in a federated infrastructure, OpenStack, OpenNebula CTA, EPOS, various national domain-specific infrastructures
NGI_UK STFC D. Britton, N. Geddes, J. Gordon