
Applications on Demand Service - architecture









This page provides information about the 'EGI Applications on Demand' infrastructure, hereafter referred to as the Infrastructure.

The EGI Applications on Demand service (also called AoD) is EGI's response to the requirements of researchers, scattered across Europe, who lack dedicated access to computational and storage resources, as well as to other facilities needed to run scientific applications.

Through a lightweight user registration process, the service offers authorised users a grant with a pre-defined quota of resources that can be used to run a growing number of scientific applications from a portfolio. The grant can be renewed or increased upon request. The portfolio is currently composed of a pre-defined set of applications from different scientific areas, and it can be further extended through contributions from users of the service.

Overview

The EGI Applications on Demand service architecture is presented in Figure 1.


The architecture is composed of the following components:

  • The User Registration Portal (URP) is a web portal used to authenticate users interested in accessing the Infrastructure. For authentication, the portal relies on the EGI Check-in service, which provides AAI services to both users and service providers. EGI Check-in supports institutional IdPs via eduGAIN, the international inter-federation service, as well as social credentials (e.g. Facebook, Google). During the authentication process, users can provide information about their contact details, institution and research topic. This information is taken into account by the operator(s) to evaluate whether the user is entitled to access and use the resources and the scientific applications available in the Platform. The URP is accessible at http://access.egi.eu
  • A catch-all VO called ‘vo.access.egi.eu’ and a pre-allocated pool of HTC and cloud resources configured for supporting the EGI Applications on Demand research activities. This resource pool currently includes cloud resources from Italy (INFN-Catania and INFN-Bari) and Spain (BIFI, CESGA) and HTC clusters from Belgium (VUB), Italy (INFN-Catania and INFN-Bari), Poland (CYFRONET) and Spain (CESGA).
  • The X.509 credentials factory service (also called the eToken server) [1] is a standards-based solution developed for the central management of robot certificates and the provisioning of Per-User Sub-Proxy (PUSP) certificates. A PUSP makes it possible to identify the individual user operating under a common robot certificate (see the sketch after this list). Permission to request robot proxy certificates, and to contact the server, is granted only to the Science Gateways/portals integrated in the Platform, and only when the user is authorised.
  • A set of Science Gateways/portals hosting the scientific applications that users can run when accessing the EGI Applications on Demand service. Currently the following three frameworks are used: the Catania Science Gateway (CSG), the WS-PGRADE/gUSE portal and the Elastic Cloud Computing Cluster (EC3).
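
To make the PUSP mechanism concrete, the sketch below shows how a per-user identifier can be carried in, and recovered from, a proxy subject. It is a minimal illustration assuming the common convention of appending a '/CN=user:<identifier>' component to the robot certificate DN; the exact DN layout may differ between deployments, and the robot DN used here is a hypothetical placeholder.

from __future__ import annotations

# Minimal sketch: building and parsing a Per-User Sub-Proxy (PUSP) subject.
# Assumes the "/CN=user:<identifier>" convention; the robot DN below is hypothetical.
ROBOT_DN = "/DC=org/DC=example/O=Robots/CN=Robot: science-gateway"


def pusp_subject(robot_dn: str, user_id: str) -> str:
    """Return the proxy subject a PUSP issued for user_id would carry."""
    return f"{robot_dn}/CN=user:{user_id}"


def user_from_pusp(subject: str) -> str | None:
    """Extract the per-user identifier from a PUSP subject, if present."""
    marker = "/CN=user:"
    if marker not in subject:
        return None  # plain robot proxy, no per-user component
    return subject.rsplit(marker, 1)[1]


if __name__ == "__main__":
    subject = pusp_subject(ROBOT_DN, "jdoe123")
    print(subject)                  # .../CN=Robot: science-gateway/CN=user:jdoe123
    print(user_from_pusp(subject))  # jdoe123

A resource provider or gateway can use the recovered identifier for per-user accounting and authorisation decisions, while the underlying credential remains the shared robot certificate.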


[1] Valeria Ardizzone, Roberto Barbera, Antonio Calanducci, Marco Fargetta, E. Ingrà, Ivan Porro, Giuseppe La Rocca, Salvatore Monforte, R. Ricceri, Riccardo Rotondo, Diego Scardaci, Andrea Schenone: The DECIDE Science Gateway. Journal of Grid Computing 10(4): 689-707 (2012)

Available resources

Currently available resources, grouped by category:

  • Cloud Resources:
    • 229 vCPU cores
    • 244 GB of RAM
    • 6TB of object storage


  • High-Throughput Resources:
    • ~13 million HEPSPEC-hours
    • 1.4 TB of disk storage


The participating sites and their capacities are listed below, grouped by resource type.

Cloud and storage

INFN Catania (upgrade to OpenStack Mitaka in progress)

INFN-CATANIA-STACK site capacity:


  • Number of Virtual CPU cores: 50 
  • Memory: 50GB
  • Scratch/ephemeral storage: 1 TB
  • Public IP addresses: 10
  • Middleware: OpenStack
  • Access mode: Opportunistic
INFN Bari

RECAS-BARI site capacity:

  • Number of Virtual CPU cores: 15 
  • Memory: 30GB
  • Scratch/ephemeral storage: 1 TB
  • Middleware: OpenStack
  • Access mode: Opportunistic
BIFI

BIFI site capacity:

  • Number of Virtual CPU cores: 100 
  • Memory: 100GB
  • Scratch/ephemeral storage: 2 TB
  • Middleware: OpenStack
  • Access mode: Opportunistic
CESGA

CESGA site capacity:

  • Number of Virtual CPU cores: 32
  • Memory: 64GB
  • Scratch/ephemeral storage: 2TB
  • Middleware: OpenNebula
  • Access mode: Pledged
High-Throughput Compute and Storage

INFN Catania 

INFN-CATANIA site capacity:

High-Throughput Compute

  • Opportunistic computing time [HEPSPEC-hours]: 1M
  • Max job duration [hours]: 72
  • Min local storage [GB] (scratch space for each core used by the job): 10
  • Min physical memory per core [GB]: 10
  • Other technical requirements: 
  • Middleware: gLite CREAM-CE

File Storage    

  • Opportunistic storage capacity [GB]: 100
INFN Bari

INFN-Bari site capacity:

High-Throughput Compute

  • Opportunistic computing time [HEPSPEC-hours]: 0.5M
  • Max job duration [hours]: 48
  • Min physical memory per core [GB]: 2
  • Middleware: gLite CREAM-CE

File Storage    

  • Opportunistic storage capacity [GB]: 100
CYFRONET-LCG2

CYFRONET-LCG2 site capacity:

High-Throughput Compute

  • Opportunistic computing time [HEPSPEC-hours]: 5M
  • Max job duration [hours]: 72
  • Min physical memory per core [GB]: 3
  • Middleware: gLite CREAM-CE and QCG

File Storage    

  • Opportunistic storage capacity [GB]: 500
BEgrid-ULB-VUB

BEgrid-ULB-VUB site capacity:

High-Throughput Compute

  • Opportunistic computing time [HEPSPEC-hours]: 5M
  • Max job duration [hours]: 72
  • Min physical memory per core [GB]: 10
  • Middleware: gLite CREAM-CE

File Storage    

  • Opportunistic storage capacity [GB]: 500
CESGA

CESGA site capacity:

High-Throughput Compute

  • Opportunistic computing time [HEPSPEC-hours]: 1M
  • Max job duration [hours]: 100
  • Min physical memory per core [GB]: 1
  • Middleware: gLite CREAM-CE

File Storage    

  • Opportunistic storage capacity [GB]: 2000
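
As a rough cross-check of the summary figures above, the per-site opportunistic computing allocations listed in the table add up to approximately the headline figure of ~13 million HEPSPEC-hours. A minimal sketch of the sum, with the values taken from the table (in millions of HEPSPEC-hours):

# Per-site opportunistic computing time from the table above,
# expressed in millions of HEPSPEC-hours.
htc_allocations = {
    "INFN-CATANIA": 1.0,
    "INFN-Bari": 0.5,
    "CYFRONET-LCG2": 5.0,
    "BEgrid-ULB-VUB": 5.0,
    "CESGA": 1.0,
}

total = sum(htc_allocations.values())
print(f"Total opportunistic HTC allocation: {total} million HEPSPEC-hours")  # 12.5, i.e. roughly 13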


The HTC, cloud and storage resources of the platform are federated through the 'vo.access.egi.eu' Virtual Organisation (VO) of EGI.

Technical details of this VO are the following:

  • <long-tail-support@mailman.egi.eu> for all support issues.
  • Managers: Gergely.Sipos@egi.eu, Diego.Scardaci@egi.eu, Peter.Solagna@egi.eu and Giuseppe.LaRocca@egi.eu

E-Token Server

The platform adopted the e-Token server [1] as a central service to generate PUSPs for the science gateways. In a nutshell, the e-Token server is a standards-based solution, developed and hosted by INFN Catania, for the central management of robot certificates and the provisioning of short-term digital proxies derived from them, allowing seamless and secure access to e-Infrastructures with an X.509-based authorisation layer.

The e-Token server uses the standard JAX-RS framework [2] to implement RESTful web services in Java and offers end users, portals and the new generation of Science Gateways a set of REST APIs to generate a PUSP given a unique identifier. PUSPs are usually generated starting from standard X.509 certificates; these digital certificates have to be loaded onto one of the secure USB smart cards (e.g. SafeNet Aladdin eToken PRO 32/64 KB) plugged into the server.

The e-Token server was conceived to provide a credential-translation service to the Science Gateways and web portals that need to interact with the EGI platform for the long tail of science (and, in general, with any e-Infrastructure).

[1] Valeria Ardizzone, Roberto Barbera, Antonio Calanducci, Marco Fargetta, E. Ingrà, Ivan Porro, Giuseppe La Rocca, Salvatore Monforte, R. Ricceri, Riccardo Rotondo, Diego Scardaci, Andrea Schenone: The DECIDE Science Gateway. Journal of Grid Computing 10(4): 689-707 (2012)

[2] Java API for RESTful Web Services (JAX-RS): https://en.wikipedia.org/wiki/Java_API_for_RESTful_Web_Services
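Because the server exposes its functionality as REST APIs, a gateway typically only needs a standard HTTP(S) client to request a PUSP. The sketch below is purely illustrative: the base URL, path and query parameters are hypothetical placeholders, not the documented API of the INFN Catania service; refer to the e-Token server documentation for the actual endpoint, parameters and access restrictions.

# Illustrative sketch: a science gateway requesting a short-term
# Per-User Sub-Proxy (PUSP) from an e-Token server REST endpoint.
# The URL, path and parameter names below are hypothetical placeholders.
import requests

ETOKEN_BASE = "https://etoken.example.org/eTokenServer"  # hypothetical base URL
ROBOT_ID = "robot-certificate-id"                        # identifier of the robot certificate held on the USB smart card
USER_ID = "jdoe123"                                      # unique identifier of the portal user


def fetch_pusp(base_url: str, robot_id: str, user_id: str) -> str:
    """Request a PEM-encoded PUSP for the given robot certificate and user identifier."""
    response = requests.get(
        f"{base_url}/eToken/{robot_id}",
        params={
            "voms": "vo.access.egi.eu:/vo.access.egi.eu",  # VO attributes to embed in the proxy
            "cn-label": f"user:{user_id}",                  # per-user component of the proxy subject
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.text  # PEM-encoded proxy certificate chain


if __name__ == "__main__":
    proxy_pem = fetch_pusp(ETOKEN_BASE, ROBOT_ID, USER_ID)
    print(proxy_pem[:200])  # print the start of the returned PEM blob

In practice only authorised Science Gateways/portals are allowed to contact such an endpoint, as noted in the component description above.
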
Policies

Acceptable Use Policy (AUP) and Conditions of Use of the 'EGI Applications on Demand Infrastructure'

EGI Applications on Demand Infrastructure Security Policy

Links for administrators

User approval:

  1. Approve affiliation: https://access.egi.eu:8888/modules#/list/Affiliations
  2. Approve resource request: https://e-grant.egi.eu/ltos/auth/login

Gateway and support approval:

Monitoring:

Accounting:

  • Accounting data of platform users: from the EGI Accounting Portal it is possible to check the accounting metrics generated for both the grid- and cloud-based resources supporting the vo.access.egi.eu VO. From the top menu, click on 'Restrict View' and then 'VO Admin' to see the accounting data of platform users.