
Service for AAI: EGI CheckIn Availability and continuity Plan

Back to main page: Services Availability Continuity Plans

= Introduction =

This page reports on the Availability and Continuity Plan for Check-in and is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing the likelihood of a risk, or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.

{| class="wikitable"
|-
!
! Last
! Next
|-
! scope="row"| Risk assessment
| 2020-11-30
| Dec 2021
|-
! scope="row"| Av/Co plan and test
| 2020-12-01
| Dec 2021
|}

Previous plans are collected here: https://documents.egi.eu/document/3649

= Performances =

The following performance targets, on a monthly basis, were agreed in the OLA:

* Availability: 99%
* Reliability: 99%

Other availability requirements:

* The service is accessible through X.509 certificates and institutional accounts
* The service is accessible via CLI and/or web UI

The service availability is regularly tested by Nagios probes (eu.egi.CertValidity, org.nagios.IdP-DiscoveryService, org.nagios.OIDC-AuthZ, org.nagios.OIDC-Provider-Config, org.nagios.SAML-IdP, org.nagios.SAML-SP, org.nagios.TTS-MasterPortal-Config, org.nagios.TTS-MasterPortal-Register): https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?host=aai.egi.eu&style=detail
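
As an illustration of the kind of check these probes perform, below is a minimal sketch of an OIDC provider-configuration check written in Python. It is not the actual probe code, and the discovery URL is an assumption that should be replaced with the endpoint the probe actually targets.

<syntaxhighlight lang="python">
"""Minimal availability check in the spirit of the OIDC provider-configuration
probe: fetch the discovery document and verify that the expected fields exist.
The discovery URL is an assumption and must be adjusted to the real endpoint."""

import json
import sys
from urllib.request import urlopen

# Assumed discovery endpoint for the Check-in IdP/SP proxy (adjust as needed).
DISCOVERY_URL = "https://aai.egi.eu/oidc/.well-known/openid-configuration"


def check_oidc_provider(url: str, timeout: float = 10.0) -> bool:
    """Return True if the discovery document is reachable and well formed."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return False
            config = json.load(resp)
    except Exception:
        return False
    # A valid provider configuration must advertise at least these entries.
    required = ("issuer", "authorization_endpoint", "token_endpoint", "jwks_uri")
    return all(key in config for key in required)


if __name__ == "__main__":
    ok = check_oidc_provider(DISCOVERY_URL)
    print("OK" if ok else "CRITICAL")
    sys.exit(0 if ok else 2)  # Nagios-style exit codes: 0 = OK, 2 = CRITICAL
</syntaxhighlight>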

The performance reports in terms of Availability and Reliability are produced by ARGO on an almost real-time basis and are also periodically collected into the Documentation Database.

Over the past years, the performance figures have not highlighted any particular Av/Co issues for Check-in that needed further investigation.

= Risks assessment and management =

For more details, please look at the [https://docs.google.com/spreadsheets/d/1DJAYenmm_5xHwkHGx0w10MbfbXxEEFbMe_-VLL_cmoc/edit#gid=1469603953 google spreadsheet]. A summary of the assessment is reported below.

== Risks analysis ==

{| class="wikitable"
|-
! Risk id
! Risk description
! Affected components
! Established measures
! Risk level
! Treatment
! Expected duration of downtime / time for recovery
|-
| 1
| Service unavailable / loss of data due to hardware failure
| All
| All services are running on virtual machines. In case of a hardware failure of the host machine, the virtual machine can be re-instantiated on another hypervisor in the private cloud. The high availability deployment reduces outages due to hardware failures to almost zero.
| Low
| Restore the service from the backups on new hosts which are not affected by the failure.
| Almost zero in the case of a hardware failure of a host machine affecting virtual machines running AAI service components other than the load balancer/SSL terminator. If the load balancer/SSL terminator component is affected, the downtime may take up to 4 hours to allow for the propagation of the DNS changes required to point to the new load balancer/SSL terminator instance.
|-
| 2
| Service unavailable / loss of data due to software failure
| All
| All services/data structures are running on virtual machines and in high availability mode. In case of a software failure on a virtual machine, the backup installation takes over. If the software is unavailable in all instances, a new virtual machine is spawned in GRNET's private environment. The high availability deployment reduces outages due to software failures to almost zero.
| Medium
| Restore the service from the backups; analyse the log data in order to find the reason for the software failure.
| Zero in the case of a partial failure. 3-4 working hours if a new virtual machine needs to be spawned.
|-
| 3
| Service unavailable / loss of data due to human error
| All
| All services/data structures are running on virtual machines and in high availability mode. In case of a human error affecting the operation of either a virtual machine or a service component, the backup installation takes over. If the affected service component is unavailable in all instances, a new virtual machine is spawned in GRNET's private environment. The high availability deployment minimises outages due to human errors.
| Medium
| Restore the service/data from backups. Review the actions that led to the failure and modify the processes so that the same error cannot be repeated.
| Up to 8 hours
|-
| 4
| Service unavailable due to network failure (network outage with causes external to the site)
| All
| GRNET has redundant network connectivity.
| Low
| Standard mitigation and recovery procedures carried out by the network operators.
| Almost zero. 3-4 working hours in case of maintenance.
|-
| 5
| Unavailability of key technical and support staff (holiday periods, sickness, ...)
| All
| At least one person from the core team (5 team members) is still available.
| Low
|
| 1 or more working days
|-
| 6
| Major disruption in the data centre (e.g. fire, flood or electric failure)
| All
| The computing centre has an electric backup system and fire control devices. In case of an occurrence despite these controls, the virtual machines can be instantiated elsewhere.
| Low
| Restore the service from the backups on new hosts which are not affected by the failure.
| 1 or more working days
|-
| 7
| Major security incident. The system is compromised by external attackers and needs to be reinstalled and restored.
| All
| The backend database store is operated in clustered mode, supporting streaming replication and Point-in-Time Recovery for a period of six months (minimum). Daily backups are also executed. Backups are stored in a separate system and can be restored at once, losing up to 24 hours of data.
| Low
| Restore the service from the backups on new hosts.
| 3-4 working hours. In case new host certificates are required, up to 1 day.
|-
| 8
| (D)DoS attack. The service is unavailable because of a coordinated DDoS.
| All
| The GRNET network provides protection against DoS attacks; a firewall can limit the impact of a DDoS.
| Low
| Use of the DDoS protection provided by GRNET at the network level.
| Depending on the attack, a few hours maximum.
|}
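
Several of the countermeasures in the table above rely on the high-availability deployment: when one IdP/SP proxy node fails, the remaining node keeps serving requests while the failed one is rebuilt. The sketch below shows how the per-node state could be checked; the node hostnames and health-check path are hypothetical placeholders, not the real Check-in configuration.

<syntaxhighlight lang="python">
"""Per-node health check for a load-balanced deployment: report which backend
nodes still answer, so a failed node can be rebuilt while the others keep
serving. Hostnames and the health-check path are hypothetical placeholders."""

from urllib.error import URLError
from urllib.request import urlopen

PROXY_NODES = ["proxy-node1.example.org", "proxy-node2.example.org"]  # hypothetical
HEALTH_PATH = "/health"  # hypothetical health-check endpoint


def healthy_nodes(nodes, timeout=5.0):
    """Return the subset of nodes that respond with HTTP 200."""
    alive = []
    for host in nodes:
        try:
            with urlopen(f"https://{host}{HEALTH_PATH}", timeout=timeout) as resp:
                if resp.status == 200:
                    alive.append(host)
        except (URLError, OSError):
            pass  # the node is considered down
    return alive


if __name__ == "__main__":
    alive = healthy_nodes(PROXY_NODES)
    down = sorted(set(PROXY_NODES) - set(alive))
    print(f"alive: {alive}, down: {down}")
</syntaxhighlight>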

== Outcome ==

The level of all the identified risks is acceptable and the countermeasures already adopted are considered satisfactory.

== Additional information ==

* The processes for recovering the service components are documented in GRNET's internal issue management system (JIRA).
** The deployment/reconfiguration process is automated using Ansible (see the sketch below this list). The Ansible roles/playbooks are publicly available from https://github.com/rciam/rciam-deploy/tree/devel, but the host inventories and group variables for each Check-in installation (production, demo, devel) are maintained in GRNET's private GitHub repository, which is only accessible to Check-in team members.
** Sensitive configuration items (host/robot certificate keys, SAML message signing/encryption keys, application/database passwords) are encrypted using ansible-vault.
* The Availability and Reliability targets do not change in case the plan is invoked.
* The approach for the return to normal working conditions is as reported in the risk assessment.
* The support unit '''Check-in (AAI)''' shall be used to report any incident or service request.
* The providers can contact EGI Operations via ticket or email in case the continuity plan is invoked, or to discuss any change to it.
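
As a sketch of how a recovery redeployment could be driven with the public roles/playbooks, the snippet below wraps an ansible-playbook run. The inventory and playbook paths, the target hostname and the vault password file are hypothetical; the real inventories and group variables live in GRNET's private repository.

<syntaxhighlight lang="python">
"""Sketch of driving a Check-in node redeployment with ansible-playbook.
All paths, hostnames and file names below are hypothetical placeholders; the
real inventories and group variables are kept in GRNET's private repository."""

import subprocess
import sys


def redeploy(inventory: str, playbook: str, limit: str, vault_pass_file: str) -> int:
    """Run ansible-playbook against a single host and return its exit code."""
    cmd = [
        "ansible-playbook",
        "-i", inventory,                            # host inventory (private repo)
        "--limit", limit,                           # rebuild only the affected node
        "--vault-password-file", vault_pass_file,   # decrypts ansible-vault secrets
        playbook,
    ]
    return subprocess.run(cmd, check=False).returncode


if __name__ == "__main__":
    sys.exit(redeploy(
        inventory="inventories/production/hosts.ini",  # hypothetical path
        playbook="playbooks/proxy.yml",                # hypothetical playbook
        limit="proxy-node2.example.org",               # hypothetical hostname
        vault_pass_file="vault_pass.txt",              # hypothetical secret file
    ))
</syntaxhighlight>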

Recovery requirements (see the sanity-check sketch after this list):

* Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 1 day
* Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity; this has to be less than the MTPoD): 4 hours
* Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): n.a.
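
As a quick sanity check, these figures can be related to the 99% monthly availability target from the Performances section: the target leaves roughly 7.2 hours of downtime in a 30-day month, so a single recovery within the 4-hour RTO fits in the monthly budget, while a disruption lasting the full 1-day MTPoD would not. The sketch below reproduces this arithmetic.

<syntaxhighlight lang="python">
"""Relate the recovery requirements to the monthly availability target."""

HOURS_PER_MONTH = 30 * 24      # 30-day month, for illustration only
AVAILABILITY_TARGET = 0.99     # monthly target agreed in the OLA
MTPOD_HOURS = 24               # maximum tolerable period of disruption: 1 day
RTO_HOURS = 4                  # recovery time objective: 4 hours

downtime_budget = (1 - AVAILABILITY_TARGET) * HOURS_PER_MONTH  # 7.2 hours

# The plan requires the RTO to be smaller than the MTPoD.
assert RTO_HOURS < MTPOD_HOURS

print(f"Monthly downtime budget at 99%: {downtime_budget:.1f} hours")
print(f"One recovery within the RTO fits the budget: {RTO_HOURS <= downtime_budget}")
print(f"A full MTPoD-long disruption fits the budget: {MTPOD_HOURS <= downtime_budget}")
</syntaxhighlight>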

= Availability and Continuity test =

The proposed A/C test will focus on a recovery scenario: the service has been disrupted and needs to be reinstalled from scratch.

The time spent restoring the service will be measured, using the last backup of the data stored in it, and any possible loss of information will be evaluated.

Performing this test will be useful to spot any issue in the recovery procedures of the service.

* Start of the recovery scenario test: Tue Dec 4 09:09:02 CET 2018. One of the two VMs running the IdP/SP Proxy service instance has been disrupted and needs to be reinstalled from scratch.
* End of the recovery scenario test: Tue Dec 4 10:40:01 CET 2018. We assumed a scenario where the second IdP/SP proxy node faced a hardware failure and a new VM had to be spawned in order to restore it and be able to provide the Check-in service in high availability/load balancing mode.

The process of spawning the new VM started at 09:25 CET (we allocated approx. 15 minutes to simulate the analysis following the detection of the VM failure at 09:09 CET).

It should be noted that during the 1.5 hours that the IdP/SP Proxy node was down, the existing user sessions were not affected and new user sessions could be established without any noticeable delay.

The test showed that in case of failure of one of the IDP/SP proxy nodes, there is no impact on the users.
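
For reference, the elapsed times reported above follow directly from the logged timestamps:

<syntaxhighlight lang="python">
"""Reproduce the elapsed times of the recovery test from its logged timestamps."""

from datetime import datetime

FMT = "%Y-%m-%d %H:%M"
detected = datetime.strptime("2018-12-04 09:09", FMT)  # VM failure detected
vm_spawn = datetime.strptime("2018-12-04 09:25", FMT)  # spawning of the new VM started
restored = datetime.strptime("2018-12-04 10:40", FMT)  # service fully restored

print("Analysis before spawning the new VM:", vm_spawn - detected)  # 0:16:00
print("Total time with one proxy node down:", restored - detected)  # 1:31:00 (~1.5 hours)
</syntaxhighlight>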

= Revision History =

{| class="wikitable"
|-
! Version
! Authors
! Date
! Comments
|-
|
| Alessandro Paolini
| 2018-11-09
| first draft
|-
|
| Alessandro Paolini
| 2018-12-04
| plan finalised
|-
|
| Alessandro Paolini
| 2020-12-01
| review of the plan completed, no need of a new continuity/recovery test; updated the sections "Performance" and "Additional information"
|-
|
| Alessandro Paolini
| 2021-02-09
| updated the link to the risk assessment
|}