The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.

Applications on Demand Service - architecture
{{Template:Op menubar}} {{Template:LTOS_menubar}} {{TOC_right}}  
{{DeprecatedAndMovedTo|new_location=https://docs.egi.eu/users/applications-on-demand/}}


= Technical and architecture details  =


<b>Applications on Demand (AoD) Service Information pages</b>

The [https://www.egi.eu/services/applications-on-demand/ EGI Applications on Demand service] (AoD) is EGI's response to the requirements of researchers who are interested in using applications in an on-demand fashion, together with the compute and storage environment needed to process and store their data.

[https://marketplace.egi.eu The service can be accessed through the EGI Marketplace]<br/>

== User Registration Portal  ==

The User Registration Portal (URP) of the platform is hosted by CYFRONET in Poland and serves as the entry point for users. The portal offers login with social or EGI SSO accounts, allows users to manage their profiles and resource requests, and acts as a central hub for accessing the connected science gateways. The user support team uses the portal to review user profiles and to evaluate the users' resource requests. The portal is accessible at http://access.egi.eu.

== Service architecture ==

The EGI Applications on Demand service architecture is presented in the figure below.

[[Image:AoDs_architecturev2.jpg|center|600px|AoDs_architecturev2.jpg]] <br>

The service architecture is composed of the following components:

*A <b>catch-all VO</b> called 'vo.access.egi.eu' and a <b>pre-allocated pool of HTC and cloud resources</b> configured to support the EGI Applications on Demand research activities. This resource pool currently includes cloud resources from Italy (INFN-Catania), Spain (CESGA) and France (IN2P3-IRES).
*The <b>X.509 credentials factory service</b> (also called the eToken server) [1] is a standards-based solution developed for the central management of robot certificates and the provision of Per-User Sub-Proxy (PUSP) certificates. A PUSP makes it possible to identify the individual user operating under a common robot certificate. Permission to request robot proxy certificates, and to contact the server, is granted only to Science Gateways/portals integrated in the platform, and only for authorized users.
*A set of <b>Science Gateways/portals</b> hosting the applications that users can run through the EGI Applications on Demand service. Currently the following three frameworks are used: the FutureGateway Science Gateway (FGSG), the WS-PGRADE/gUSE portal and the Elastic Cloud Computing Cluster (EC3).
*The <b>EGI VMOps dashboard</b> provides a Graphical User Interface (GUI) to perform Virtual Machine (VM) management operations on the EGI Federated Cloud. The dashboard offers a user-friendly environment where users can create and manage VMs, along with their associated storage devices, on any of the EGI Federated Cloud providers. It provides a complete view of the deployed applications and a unified resource management experience, independently of the technology driving each resource centre of the federation. Users can create new infrastructure topologies (a set of VMs with their associated storage and contextualisation) through a wizard-like builder that guides them through the selection of the virtual appliances, virtual organisation and resource provider, and the final customisation of the VMs to be deployed. Its tight integration with the AppDB Cloud Marketplace allows automatic discovery of the appliances supported at each resource provider. Once a topology has been created, VMOps allows management actions to be applied both to the set of VMs comprising a topology and, at a fine-grained level, to each individual VM.
*The <b>EGI Notebooks</b> service, built on top of JupyterHub, offers an open-source web application where users can create and share documents that contain live code, equations, visualisations and explanatory text.

== Operational Level Agreements ==

* EC3: https://documents.egi.eu/document/3370
* FGSG: https://documents.egi.eu/document/2782
* WS-PGRADE/gUSE: https://documents.egi.eu/document/3368
* Cloud resource providers: https://documents.egi.eu/document/2773

== GGUS Support Units (SUs) ==

* FutureGateway Science Gateway (FGSG):
** The FGSG GGUS SU is available under the Core Services category
* Elastic Cloud Computing Cluster (EC3):
** The EC3 GGUS SU is available under the Core Services category
* WS-PGRADE/gUSE portal:
** The WS-PGRADE/gUSE GGUS SU is available under the Core Services category
* The EGI Notebooks:
** The EGI Notebooks GGUS SU is available under the EGI Notebooks category
* The EGI VMOps dashboard:
** The AppDB GGUS SU is available under the Core Services category

== Available resources  ==

Currently available resources, grouped by category:

* <u>Cloud resources</u>:
** 246 vCPU cores
** 781 GB of RAM
** 2.1 TB of object storage
* <u>High-Throughput resources</u>:
** 9.5 million HEPSPEC-hours
** 1.4 TB of disk storage

For more details about the resources allocated to this service, see the per-site capacities below.

=== Cloud computing and object storage ===

*Resource Centre: [https://goc.egi.eu/portal/index.php?Page_Type=Site&id=115 CESGA]
**Number of virtual CPU cores: 32
**Memory: 64 GB
**Scratch/ephemeral storage: 2 TB
**Middleware: OpenNebula
**Access mode: Pledged

*[https://goc.egi.eu/portal/index.php?Page_Type=Site&id=861 INFN-CATANIA-STACK]
**Number of virtual CPU cores: 20
**Memory: 50 GB
**Scratch/ephemeral storage: 1 TB
**Public IP addresses: 10
**Middleware: OpenStack
**Access mode: Opportunistic

*[https://goc.egi.eu/portal/index.php?Page_Type=Site&id=476 RECAS-BARI]
**Number of virtual CPU cores: 15
**Memory: 30 GB
**Scratch/ephemeral storage: 1 TB
**Middleware: OpenStack
**Access mode: Opportunistic

=== High-Throughput Compute and Storage ===

*[https://goc.egi.eu/portal/index.php?Page_Type=Service&id=2441 CESGA]
**High-Throughput Compute:
***Opportunistic computing time [HEPSPEC-hours]: 1M
***Max job duration [hours]: 100
***Min physical memory per core [GB]: 1
***Middleware: gLite CREAM-CE
**File storage:
***Opportunistic storage capacity [GB]: 200

*GILDA-INFN-CATANIA
**High-Throughput Compute:
***Opportunistic computing time [HEPSPEC-hours]: 1M
***Max job duration [hours]: 72
***Min local storage [GB] (scratch space for each core used by the job): 10
***Min physical memory per core [GB]: 10
***Middleware: gLite CREAM-CE
**File storage:
***Opportunistic storage capacity [GB]: 100

*INFN-Bari
**High-Throughput Compute:
***Opportunistic computing time [HEPSPEC-hours]: 0.5M
***Max job duration [hours]: 48
***Min physical memory per core [GB]: 2
***Middleware: gLite CREAM-CE
**File storage:
***Opportunistic storage capacity [GB]: 100

*CYFRONET-LCG2
**High-Throughput Compute:
***Opportunistic computing time [HEPSPEC-hours]: 5M
***Max job duration [hours]: 72
***Min physical memory per core [GB]: 3
***Middleware: gLite CREAM-CE and QCG
**File storage:
***Opportunistic storage capacity [GB]: 500

*BEgrid-ULB-VUB
**High-Throughput Compute:
***Opportunistic computing time [HEPSPEC-hours]: 5M
***Max job duration [hours]: 72
***Min physical memory per core [GB]: 10
***Middleware: gLite CREAM-CE
**File storage:
***Opportunistic storage capacity [GB]: 500
 
The HTC, cloud and storage resources of the platform are federated through the 'vo.access.egi.eu' Virtual Organisation (VO) of EGI. The technical details of this VO are the following:
 
*ID Card in the EGI Operations Portal: http://operations-portal.egi.eu/vo/view/voname/vo.access.egi.eu
*Name: vo.access.egi.eu
*Scope: Global
*Homepage URL: https://wiki.egi.eu/wiki/Long-tail_of_science
*Acceptable use policy for users: https://documents.egi.eu/document/2635
*Discipline: Support Activities
*VO Membership management: VOMS+PERUN
**PERUN (perun.cesnet.cz); the enrollment URL is https://perun.metacentrum.cz/perun-registrar-cert/?vo=vo.access.egi.eu
**voms1.grid.cesnet.cz and voms2.grid.cesnet.cz
*Contacts:
**&lt;long-tail-support@mailman.egi.eu&gt; for all support issues.
**Managers: Gergely.Sipos@egi.eu, Diego.Scardaci@egi.eu, Peter.Solagna@egi.eu
 
== Per-user sub-proxies  ==
 
The purpose of a '''per-user sub-proxy (PUSP)''' is to allow identification of the individual users that operate using a common robot certificate. A common example is where a web portal (e.g., a scientific gateway) somehow identifies its user and wishes to authenticate as that user when interacting with EGI resources. This is achieved by creating a proxy credential from the robot credential with the proxy certificate containing user-identifying information in its additional proxy CN field. The user-identifying information may be pseudo-anonymised where only the portal knows the actual mapping.
 
Example of a Per-User Sub-Proxy (PUSP):
<pre>subject   : /C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: EGI Training Service - XXXXX/CN=user:test1/CN=1286259828
issuer    : /C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: EGI Training Service - XXXXX/CN=user:test1
identity  : /C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: EGI Training Service - XXXXX
type      : RFC3820 compliant impersonation proxy
strength  : 1024
path      : /home/XXXXX/proxy.txt
timeleft  : 23:59:15
key usage : Digital Signature, Key Encipherment, Data Encipherment
=== VO training.egi.eu extension information ===
VO        : training.egi.eu
subject   : /C=IT/O=INFN/OU=Robot/L=Catania/CN=Robot: EGI Training Service - XXXXX
issuer    : /DC=org/DC=terena/DC=tcs/OU=Domain Control Validated/CN=voms1.grid.cesnet.cz
attribute : /training.egi.eu/Role=NULL/Capability=NULL
timeleft  : 23:59:17
uri       : voms1.grid.cesnet.cz:15014
</pre>
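The user-identifying component can be recovered programmatically from a PUSP subject. Below is a minimal Python sketch, assuming the OpenSSL-style slash-separated DN form shown above and that no attribute value itself contains a slash (true for the example):

```python
# Sketch: recover the user identifier embedded in a PUSP subject DN.
# Assumes the OpenSSL-style slash-separated DN shown above and that no
# attribute value itself contains a slash.

def pusp_user_id(subject_dn):
    """Return the identifier carried by the 'CN=user:<id>' component, or None."""
    for component in subject_dn.split("/"):
        if component.startswith("CN=user:"):
            return component[len("CN=user:"):]
    return None

subject = ("/C=IT/O=INFN/OU=Robot/L=Catania"
           "/CN=Robot: EGI Training Service - XXXXX"
           "/CN=user:test1/CN=1286259828")
print(pusp_user_id(subject))  # prints: test1
```

A portal that keeps a pseudonymised mapping would look the returned identifier up in its own user database; the trailing numeric CN added at proxy creation is deliberately ignored.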
== E-Token Server  ==
 
The platform adopted the '''e-Token server''' [1] as a central service to generate PUSPs for science gateways. In a nutshell, the e-Token server is a standards-based solution, developed and hosted by INFN Catania, for the central management of robot certificates and the provisioning of short-term digital proxies derived from them, allowing seamless and secure access to e-Infrastructures that use an X.509-based authorisation layer.
 
The e-Token server uses the standard JAX-RS framework [2] to implement RESTful web services in Java, and provides end users, portals and the new generation of Science Gateways with a set of REST APIs to generate PUSPs given a unique identifier. PUSPs are usually generated starting from standard X.509 certificates. These digital certificates have to be loaded onto one of the secure USB smart cards (e.g. SafeNet Aladdin eToken PRO 32/64 KB) plugged into the server.
 
The e-Token server was conceived to provide a credential-translation system for Science Gateways and web portals that need to interact with the EGI platform for the long tail of science (and, in general, with any e-Infrastructure).
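As an illustration of such a credential-translation call, the sketch below builds the kind of REST request a gateway might send to obtain a PUSP. The host name, path layout, token serial and query parameter names (`voms`, `cn-label`) are assumptions for illustration only; consult the e-Token server documentation for the actual API.

```python
# Sketch: building a (hypothetical) e-Token server REST request for a PUSP.
# Host, path, token serial and query parameter names are ILLUSTRATIVE
# assumptions, not the documented API of the real e-Token server.
from urllib.parse import urlencode

def pusp_request_url(base, token_serial, vo, user_id):
    """Build the URL asking the server to sign a per-user sub-proxy
    for `user_id`, decorated with VOMS attributes of `vo`."""
    query = urlencode({
        "voms": f"{vo}:/{vo}",          # VOMS FQAN to embed in the proxy
        "cn-label": f"user:{user_id}",  # becomes the extra 'CN=user:<id>' field
    })
    return f"{base}/eToken/{token_serial}?{query}"

url = pusp_request_url("https://etoken.example.org:8443/eTokenServer",
                       "TOKEN-SERIAL", "training.egi.eu", "test1")
print(url)
# A gateway would then fetch this URL over HTTPS (e.g. with urllib.request)
# and receive the PEM-encoded proxy in the response body.
```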
 
[1] Valeria Ardizzone, Roberto Barbera, Antonio Calanducci, Marco Fargetta, E. Ingrà, Ivan Porro, Giuseppe La Rocca, Salvatore Monforte, R. Ricceri, Riccardo Rotondo, Diego Scardaci, Andrea Schenone: The DECIDE Science Gateway. Journal of Grid Computing 10(4): 689-707 (2012)
 
[2] Java API for RESTful Web Services (JAX-RS): https://en.wikipedia.org/wiki/Java_API_for_RESTful_Web_Services
 
== Links for administrators  ==

User approval:

#Approve affiliations: https://access.egi.eu:8888/modules#/list/Affiliations
#Approve resource requests: https://e-grant.egi.eu/ltos/auth/login

Gateway and support approval:

*VO membership management interface in PERUN: https://perun.metacentrum.cz/cert/gui/
*To register in the VO (relevant for gateway robot certificates and for support staff): https://perun.metacentrum.cz/cert/registrar/?vo=vo.access.egi.eu

== Accounting ==

*Accounting data about the VO users can be checked at https://accounting.egi.eu/
*From the EGI Accounting Portal it is possible to check the accounting metrics generated for both grid- and cloud-based resources supporting the vo.access.egi.eu VO.
*From the top menu, click on <u>'Restrict View'</u> and <u>'VO Admin'</u> to see the accounting data of platform users.

[[Image:EGI_AoDs_Accounting.png|center|800px|EGI_AoDs_Accounting.png]]

== Policies & documents ==

*[https://documents.egi.eu/public/ShowDocument?docid=2635 Acceptable Use Policy (AUP) and Conditions of Use of the EGI Applications on Demand (AoD) service]
*[https://documents.egi.eu/public/ShowDocument?docid=2734 EGI Applications on Demand (AoD) service Security Policy]
*[https://documents.egi.eu/public/ShowDocument?docid=3127 Doc vetting manual for the EGI Support team]

== Monitoring ==

*The EGI Applications on Demand service components have been registered in the [http://gocdb.egi.eu/ GOCDB] and connected to the EGI monitoring system, which is based on [http://argo.egi.eu ARGO].
*By default, ARGO automatically gathers the service endpoints from the GOCDB and runs simple HTTPS checks using standard Nagios probes. If necessary, new probes can easily be developed and added.
*The following service components are monitored by ARGO:
**EGI-FGSG
**EGI-NOTEBOOKS
**GRIDOPS-WS-PGRADE
**GRIDOPS-APPDB
**GRIDOPS-EC3
*To check the status of the service components, see the [http://argo.egi.eu/egi/OPS-MONITOR-Critical ARGO] report page.
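For illustration, the basic HTTPS endpoint check that ARGO applies to registered endpoints can be approximated in a few lines of Python. This is a sketch only; the production checks are standard Nagios probes.

```python
# Sketch of a basic HTTPS availability check in the spirit of the simple
# 'https' probes ARGO runs against registered endpoints (illustrative only;
# the production checks are standard Nagios probes).
import urllib.error
import urllib.request

def https_check(url, timeout=10.0):
    """Return (ok, detail) roughly matching a Nagios OK/CRITICAL outcome."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400, f"HTTP {resp.status}"
    except urllib.error.HTTPError as exc:            # server replied with an error
        return False, f"HTTP {exc.code}"
    except (urllib.error.URLError, OSError) as exc:  # DNS/TLS/connection failure
        return False, f"connection failed: {exc}"

# e.g. https_check("https://argo.egi.eu/") returns (True, "HTTP 200")
# when the endpoint is up.
```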
<br/>


== Useful Links  ==

*VO ID card: http://operations-portal.egi.eu/vo/view/voname/vo.access.egi.eu
*Name: <code>vo.access.egi.eu</code>
*Scope: Global
*Disciplines: Support Activities
*VOMS servers: voms1.grid.cesnet.cz and voms2.grid.cesnet.cz
*VO membership management: https://perun.metacentrum.cz/perun-registrar-cert/?vo=vo.access.egi.eu
*Contacts:
**EGI Support Team: applications-platform-support@mailman.egi.eu
**Managers:
***Giuseppe La Rocca (<giuseppe.larocca@egi.eu>)
***Diego Scardaci (<diego.scardaci@egi.eu>)
***Gergely Sipos (<gergely.sipos@egi.eu>)
<br/>


Accounting (links for administrators):

*Detailed accounting data about the VO users can be obtained by the VO managers at https://accounting-devel.egi.eu/user/voadm.php
*To see the list of VO members: https://voms1.grid.cesnet.cz:8443/voms/vo.access.egi.eu/user/search.action
*Accounting data of platform users: ...

== Roadmap  ==

*<strike>Integration of JupyterHub as a Service (mid 2018)</strike>
*<strike>Upgrade of the CSG to Liferay 7 to use the FutureGateway (FG) API server developed in the context of the INDIGO-DataCloud project (2019)</strike>
*<strike>Integration of the open-source serverless computing platform for data-processing applications</strike>
*<strike>Improve the user experience in the EC3 portal (mid 2018)</strike>
*<strike>Integrate the HNSciCloud voucher schemes in the EC3 portal (2018)</strike>
*<strike>Join the IN2P3-IRES cloud provider to the vo.access.egi.eu VO (2019)</strike>
*<strike>Configure one instance of the PROMINENCE service for the vo.access.egi.eu VO (2019)</strike>
*<strike>Create an Ansible recipe to run big data workflows with the ECAS/Ophidia framework in EGI (2019)</strike>
*<strike>Agree OLAs with additional cloud providers of the EGI Federation (2019)</strike>

Latest revision as of 10:19, 23 July 2021
