Configuration Database GOCDB Availability and Continuity Plan

From EGIWiki
Revision as of 17:13, 30 March 2021 by Apaolini




Back to main page: Services Availability Continuity Plans

Introduction

This page reports the Availability and Continuity Plan for the GOCDB. It is the result of the risk assessment conducted for this service: a series of risks and threats was identified and analysed, along with the countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood or the impact of a risk, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process concludes with an availability and continuity test.

                  Last        Next
Risk assessment   2020-02-25  March 2021
Av/Co plan        2020-02-26  March 2021

Previous plans are collected here: https://documents.egi.eu/secure/ShowDocument?docid=3537

Availability requirements and performance

TO REVIEW

The following performance targets, on a monthly basis, were agreed in the OLA:

  • Availability: 99%
  • Reliability: 99%

Other availability requirements:

  • the service is accessible via X.509 certificate and EGI Check-in
  • the service is accessible via web UI and API
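As an illustration of API access, the sketch below parses a GOCDB PI response of the kind returned by the public `get_site_list` method. The endpoint URL is the public PI, but the exact XML attribute layout shown in the sample is an assumption for illustration and may differ from the live service.

```python
# Sketch: listing site names from a GOCDB PI get_site_list response.
# The sample XML layout is an assumption; check the live PI for the
# authoritative schema.
import urllib.request
import xml.etree.ElementTree as ET

GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/"  # public PI endpoint

def parse_site_names(xml_text: str) -> list[str]:
    """Extract the NAME attribute from every SITE element in a PI response."""
    root = ET.fromstring(xml_text)
    return [site.get("NAME") for site in root.findall("SITE")]

def fetch_site_names(timeout: float = 10.0) -> list[str]:
    """Fetch the site list from the public PI (network access required)."""
    url = f"{GOCDB_PI}?method=get_site_list"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return parse_site_names(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Offline demonstration with an illustrative (assumed) response body.
    sample = """<results>
      <SITE ID="1" NAME="EXAMPLE-SITE-A" COUNTRY="United Kingdom"/>
      <SITE ID="2" NAME="EXAMPLE-SITE-B" COUNTRY="Italy"/>
    </results>"""
    print(parse_site_names(sample))  # ['EXAMPLE-SITE-A', 'EXAMPLE-SITE-B']
```

The same parsing applies to other PI methods, since they share the `<results>` envelope.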

The service availability is regularly tested by Nagios probes (eu.egi.CertValidity, org.nagios.GOCDB-PortCheck, org.nagiosexchange.GOCDB-PI, org.nagiosexchange.GOCDB-WebCheck): https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?servicegroup=SERVICE_egi.GOCDB&style=detail
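The kind of reachability check performed by a probe such as org.nagios.GOCDB-PortCheck can be sketched as follows. This is a minimal illustration, not the actual probe code; the host and port are the obvious defaults for the service, and the OK/CRITICAL wording follows the usual Nagios convention.

```python
# Sketch of a TCP reachability check in the spirit of
# org.nagios.GOCDB-PortCheck (illustrative, not the real probe).
import socket

def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Nagios convention: report OK when reachable, CRITICAL otherwise.
    ok = check_port("goc.egi.eu", 443)
    print("OK" if ok else "CRITICAL")
```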

The performance reports in terms of Availability and Reliability are produced by ARGO on an almost real-time basis, and they are also periodically collected in the Documentation Database.

Over the past years, the performance reports have not highlighted any particular Av/Co issues for GOCDB that would need further investigation.

Risk assessment and management

For more details, please look at the Google spreadsheet. A summary of the assessment is reported below.

Risk analysis

TO REVIEW

Risk 1: Service unavailable / loss of data due to hardware failure
  Affected components: Web portal and PI, Database
  Established measures: Reduce the impact: GOCDB has a separate failover instance. If the web server or database machines failed and were not readily restartable, the failover would be engaged.
  Risk level: Low
  Treatment: In the case of a prolonged outage of the production hardware (+2 days), the failover would be set to read-write mode, allowing the data to be updated.
  Expected duration of downtime / time for recovery: in the event of production hardware failure, the service would be restored incrementally:
    • 0.5 working days: a broadcast would be made providing the URL to the read-only failover
    • 1 working day: the DNS entry for goc.egi.eu would be changed to point at the read-only failover
    • 2 working days: the failover would be set to read-write mode

Risk 2: Service unavailable / loss of data due to software failure
  Affected components: Web portal and PI
  Established measures: Reduce the impact: GOCDB has a separate failover instance. If the web server or the GOCDB software failed and were not readily restartable, the failover would be engaged. As the database is unaffected, all data would eventually be propagated to the failover within an hour, or the failover could be pointed at the database.
  Risk level: Low
  Treatment: In the case of a prolonged outage of the production hardware (+2 days), the failover would be set to read-write mode, allowing the data to be updated.
  Expected duration of downtime / time for recovery: in the event of production hardware failure, the service would be restored incrementally:
    • 0.5 working days: a broadcast would be made providing the URL to the read-only failover
    • 1 working day: the DNS entry for goc.egi.eu would be changed to point at the read-only failover
    • 2 working days: the failover would be set to read-write mode

Risk 3: Service unavailable / loss of data due to human error
  Affected components: Web portal and PI, Database
  Established measures: Reduce the likelihood: access to production systems is limited; services and procedures are documented internally. Reduce the impact: the configuration of the web portal and PI is backed up daily; the database is backed up hourly for failover.
  Risk level: Low
  Treatment: In the case of a prolonged outage of the production service (+2 days) due to human error, the failover would be set to read-write mode, allowing the data to be updated.
  Expected duration of downtime / time for recovery: in the event of production hardware failure, the service would be restored incrementally:
    • 0.5 working days: a broadcast would be made providing the URL to the read-only failover
    • 1 working day: the DNS entry for goc.egi.eu would be changed to point at the read-only failover
    • 2 working days: the failover would be set to read-write mode

Risk 4: Service unavailable due to network failure (network outage with causes external to the site)
  Affected components: Web portal and PI, Database
  Established measures: Reduce the impact: GOCDB has a separate failover instance at an STFC data centre located at a different site. If there was a network failure at RAL that was not readily recoverable, the failover would be engaged.
  Risk level: Low
  Treatment: In the case of a prolonged outage of the network (+2 days), the failover would be set to read-write mode, allowing the data to be updated.
  Expected duration of downtime / time for recovery: in the event of production hardware failure, the service would be restored incrementally:
    • 0.5 working days: a broadcast would be made providing the URL to the read-only failover
    • 1 working day: the DNS entry for goc.egi.eu would be changed to point at the read-only failover
    • 2 working days: the failover would be set to read-write mode

Risk 5: Unavailability of key technical and support staff (holiday periods, sickness, ...)
  Affected components: Web portal and PI, Database
  Established measures: Reduce the likelihood: GOCDB technical / support staff try not to be on leave at the same time for a prolonged period. A cover person, capable of handling tickets and investigating minor problems, is assigned in the case of short-term absences, or an 'At Risk' is declared.
  Risk level: Low
  Treatment: The service would be restored on the return of the technical / support staff.
  Expected duration of downtime / time for recovery: the length of the absence / declared downtime.

Risk 6: Major disruption in the data centre (for example fire, flood, or electrical failure)
  Affected components: Web portal and PI, Database
  Established measures: Reduce the impact: GOCDB runs on a UPS, so it should remain operational in the event of an electrical failure. In the case of other, long-lasting data centre disruption, the failover would be engaged.
  Risk level: Low
  Treatment: In the case of a prolonged disruption (+2 days) to the data centre affecting GOCDB, the failover would be set to read-write mode, allowing the data to be updated.
  Expected duration of downtime / time for recovery: in the event of production hardware failure, the service would be restored incrementally:
    • 0.5 working days: a broadcast would be made providing the URL to the read-only failover
    • 1 working day: the DNS entry for goc.egi.eu would be changed to point at the read-only failover
    • 2 working days: the failover would be set to read-write mode

Risk 7: Major security incident: the system is compromised by external attackers and needs to be reinstalled and restored
  Affected components: Web portal and PI, Database
  Established measures: Reduce the impact: the failover would be engaged while the security incident is being investigated.
  Risk level: Low
  Treatment: In the case of a prolonged outage (+2 days) of the production GOCDB, the failover would be set to read-write mode, allowing the data to be updated.
  Expected duration of downtime / time for recovery: in the event of production hardware failure, the service would be restored incrementally:
    • 0.5 working days: a broadcast would be made providing the URL to the read-only failover
    • 1 working day: the DNS entry for goc.egi.eu would be changed to point at the read-only failover
    • 2 working days: the failover would be set to read-write mode

Risk 8: (D)DoS attack: the service is unavailable because of a coordinated DDoS
  Affected components: Web portal and PI
  Established measures: Reduce the impact: the number of requests over time is monitored, so any abnormal increase in traffic or response time would be noticed.
  Risk level: Low
  Treatment: Ability to block offending IPs.
  Expected duration of downtime / time for recovery: up to 4 hours.
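Several of the treatments above involve flipping the DNS entry for goc.egi.eu to the failover. A small sketch of how that switch could be detected mechanically is shown below; the IP addresses are placeholders from the reserved documentation ranges, not the real service addresses, and the resolver is injected so the logic can be exercised without live DNS.

```python
# Sketch: detect whether goc.egi.eu currently resolves to production
# or to the failover. The address sets below are placeholders
# (TEST-NET ranges), not the real service IPs.
import socket

PRODUCTION_IPS = {"192.0.2.10"}   # placeholder for the RAL production address
FAILOVER_IPS = {"198.51.100.20"}  # placeholder for the DL failover address

def current_role(hostname: str, resolver=socket.gethostbyname) -> str:
    """Classify where the hostname currently points: production, failover, or unknown."""
    ip = resolver(hostname)
    if ip in PRODUCTION_IPS:
        return "production"
    if ip in FAILOVER_IPS:
        return "failover"
    return "unknown"
```

A monitoring job could run this after a DNS change to confirm the flip has propagated before broadcasting the new URL.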

Outcome

TO REVIEW

The level of all the identified risks is acceptable, and the countermeasures already adopted are considered satisfactory.

Additional information

TO REVIEW

  • An internal recovery procedure is available to the staff.
  • The Availability targets do not change if the plan is invoked.
  • Recovery requirements:
    • Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 2 days
    • Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity (this has to be less than MTPoD)): 1 day
    • Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): 2 days
  • The approach for the return to normal working conditions is as reported in the risk assessment.
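The recovery requirements above are only internally consistent if the RTO is smaller than the MTPoD, as the definition itself requires. A minimal sketch checking the plan's stated values (units are days, as in the plan):

```python
# Sketch: sanity-check the recovery parameters stated in this plan.
# Values and units (days) are taken from the recovery requirements above.
MTPOD_DAYS = 2  # maximum tolerable period of disruption
RTO_DAYS = 1    # recovery time objective
RPO_DAYS = 2    # acceptable latency of data that will not be recovered

def plan_is_consistent(mtpod: float, rto: float) -> bool:
    """The RTO has to be positive and strictly less than the MTPoD."""
    return 0 < rto < mtpod

print(plan_is_consistent(MTPOD_DAYS, RTO_DAYS))  # True for the values above
```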

Availability and Continuity test

Given the use of an off-site failover, it was agreed with the provider that testing a recovery scenario is not currently realistic.

In the event that the service at RAL disappears, the DNS entry would be flipped to point at the failover at DL, while reinstalling locally if needed. The failover at DL is accessible here: https://goc.dl.ac.uk . The provider updated its DNS entry to point at its load balancers using the failover scripts, so they are confident these work.

Work is ongoing to move GOCDB to a different internal infrastructure; this will introduce configuration management to GOCDB for the first time. As such, whatever reinstall process the provider were to test today would be obsolete in a matter of months.

Once the migration has been completed, the execution of a recovery test will be evaluated.

A pre-production instance is being deployed at gocdb-preprod.egi.eu.

Revision History

Version Authors Date Comments

Alessandro Paolini 2018-08-10 first draft, discussing with the provider

Alessandro Paolini 2018-11-02 plan finalised

Alessandro Paolini 2019-11-19 starting the yearly review....

Alessandro Paolini 2020-02-26 review completed

Alessandro Paolini 2021-03-30 starting the yearly review....