Service for AAI: EGI Catchall Availability and Continuity Plan

Latest revision as of 17:36, 18 March 2020


Introduction

This page reports on the Availability and Continuity Plan for EGI Catchall and is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk occurring or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process concludes with an availability and continuity test.

                  Last        Next
Risks assessment  2018-10-24  Q4 2020
Av/Co plan        2018-11-07  Q4 2020

Previous plans are collected here: https://documents.egi.eu/secure/ShowDocument?docid=3543

Performances

The following performance targets, on a monthly basis, were agreed in the OLA:

  • Availability: 95%
  • Reliability: 99%
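The difference between the two targets can be illustrated with a small sketch of the usual A/R convention (an assumption here, not the exact ARGO formula): UNKNOWN monitoring periods are excluded, and reliability additionally excludes scheduled downtime.

```python
def availability_reliability(minutes_up, minutes_down, minutes_scheduled):
    """Monthly availability and reliability as percentages.

    minutes_down includes scheduled downtime; UNKNOWN monitoring
    periods are assumed to be excluded before calling.
    """
    known = minutes_up + minutes_down          # monitored time, UNKNOWN excluded
    availability = 100.0 * minutes_up / known
    # Reliability does not count scheduled downtime against the service.
    reliability = 100.0 * minutes_up / (known - minutes_scheduled)
    return availability, reliability

# A 30-day month (43200 minutes): 1200 minutes down, 800 of them scheduled.
a, r = availability_reliability(42000, 1200, 800)
# a ≈ 97.2% (meets the 95% target), r ≈ 99.1% (meets the 99% target)
```

This is why a month with a long scheduled intervention can still meet the 99% reliability target while availability drops closer to the 95% floor.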

Other availability requirements:

  • The service is accessible through an X.509 certificate
  • The service is accessible via CLI and/or web UI

The service availability is regularly tested by the Nagios probe eu.egi.VOMS-CertValidity: https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?host=voms2.hellasgrid.gr&style=detail
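As an illustration of what such a certificate-validity check does, here is a minimal sketch; the port, the 14-day warning threshold, and the function names are illustrative assumptions, and the actual probe logic belongs to ARGO/Nagios.

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(not_after, now):
    """Days until a certificate's notAfter timestamp, given in the
    openssl text form returned by ssl.getpeercert(), e.g.
    'Nov 26 12:00:00 2025 GMT'."""
    expires = datetime.strptime(not_after.replace(" GMT", ""),
                                "%b %d %H:%M:%S %Y")
    return (expires.replace(tzinfo=timezone.utc) - now).days

def check_host_cert(host, port=443, warn_days=14):
    """Fetch the server certificate over TLS and flag imminent expiry.
    A rough analogue of a cert-validity probe; warn_days is hypothetical."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    days = cert_days_remaining(not_after, datetime.now(timezone.utc))
    return ("OK" if days > warn_days else "WARNING"), days
```

For example, `check_host_cert("voms2.hellasgrid.gr")` would report the days remaining on that host's certificate, which is essentially what the probe alarms on.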

Availability and Reliability performance reports are produced by ARGO in near real time and are also periodically collected into the Documentation Database.

Over the past years, EGI Catchall has not had particular Av/Co issues highlighted by the performance reports that would need further investigation.

Risks assessment and management

For more details, please see the google spreadsheet; a summary of the assessment is reported here.

Risks analysis

Risk 1: Service unavailable / loss of data due to hardware failure
  Affected components: VOMS database, MyProxy proxy files
  Established measures: the VOMS database and the MyProxy proxy directory are regularly backed up; additionally, they run in a redundant VM environment on redundant storage
  Risk level: Low
  Treatment: service restored from backups
  Expected duration of downtime / time for recovery: 1 working day

Risk 2: Service unavailable / loss of data due to software failure
  Affected components: VOMS database, MyProxy proxy files
  Established measures: the VOMS database and the MyProxy proxy directory are regularly backed up; additionally, they run in a redundant VM environment on redundant storage
  Risk level: Low
  Treatment: service restored from backups
  Expected duration of downtime / time for recovery: 1 working day

Risk 3: Service unavailable / loss of data due to human error
  Affected components: VOMS database, MyProxy proxy files
  Established measures: the VOMS database and the MyProxy proxy directory are regularly backed up; additionally, they run in a redundant VM environment on redundant storage
  Risk level: Low
  Treatment: service restored from backups
  Expected duration of downtime / time for recovery: 1 working day

Risk 4: Service unavailable due to network failure (network outage with causes external to the site)
  Affected components: VOMS, MyProxy
  Established measures: there are redundant uplinks connecting the data centre to the public network
  Risk level: Medium
  Treatment: service restored from backups
  Expected duration of downtime / time for recovery: 1 working day

Risk 5: Unavailability of key technical and support staff (holiday periods, sickness, ...)
  Affected components: VOMS, MyProxy
  Established measures: there is always one member of the team available
  Risk level: Low
  Treatment: -
  Expected duration of downtime / time for recovery: -

Risk 6: Major disruption in the data centre (for example fire, flood, or electrical failure)
  Affected components: VOMS database, MyProxy proxy files
  Established measures: the data centre is well maintained, with UPS, diesel generator, and fire suppression system
  Risk level: Medium
  Treatment: service restored from backups
  Expected duration of downtime / time for recovery: 1 or more working days

Risk 7: Major security incident (the system is compromised by external attackers and needs to be reinstalled and restored)
  Affected components: VOMS database, MyProxy proxy files
  Established measures: the VOMS database and the MyProxy proxy directory are regularly backed up
  Risk level: Low
  Treatment: service restored from backups
  Expected duration of downtime / time for recovery: 1 working day

Risk 8: (D)DoS attack (the service is unavailable because of a coordinated DDoS)
  Affected components: VOMS, MyProxy
  Established measures: the network is monitored for DDoS attacks by the GRNET NOC
  Risk level: Medium
  Treatment: service restored from backups
  Expected duration of downtime / time for recovery: 1 working day

Outcome

The level of all the identified risks is acceptable, and the countermeasures already adopted are considered satisfactory.

Additional information

  • Procedures for the various countermeasures to invoke in case of risk occurrence are available to the component provider.
  • The availability targets do not change in case the plan is invoked.
  • Recovery requirements:
    • Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that the service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 5 days
    • Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity; this has to be less than the MTPoD): 3 days
    • Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): n.a.
  • Approach for the return to normal working conditions: as reported in the risk assessment.
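The constraint stated above, that the RTO has to stay below the MTPoD, can be expressed as a trivial consistency check; the function name and day-based units are illustrative, not part of the plan.

```python
def validate_recovery_targets(mtpod_days, rto_days):
    """Sanity-check recovery requirements: the recovery time objective
    must be strictly less than the maximum tolerable period of disruption."""
    if not rto_days < mtpod_days:
        raise ValueError(
            f"RTO ({rto_days} days) must be less than MTPoD ({mtpod_days} days)")
    return True

# Values from this plan: MTPoD = 5 days, RTO = 3 days.
validate_recovery_targets(mtpod_days=5, rto_days=3)
```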

Availability and Continuity test

The criticality level of this service is low.

Taking into account the availability and continuity measures adopted by the provider, and considering that the VOMS and MyProxy services are quite easy to install, we do not require a continuity/recovery test for this service: the team has extensive experience with VOMS and is able to install a new instance rapidly if needed.

Revision History

Version  Authors             Date        Comments
-        Alessandro Paolini  2018-11-02  first draft, discussing with the provider
-        Alessandro Paolini  2018-11-07  plan finalised
-        Alessandro Paolini  2019-11-26  starting the yearly review
-        Alessandro Paolini  2020-03-18  review completed, no changes