AppDB Availability and Continuity Plan

From EGIWiki
Revision as of 16:00, 27 January 2020 by Wvkarag (talk | contribs) (Performances)


Back to main page: Services Availability Continuity Plans

Introduction

This page reports the Availability and Continuity Plan for AppDB. It is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood or the impact of a risk, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.

Activity | Last | Next
Risks assessment | 2018-10-26 | October 2019
Av/Co plan and test | 2018-11-23 | November 2019

Previous plans are collected here: https://documents.egi.eu/secure/ShowDocument?docid=3544

Performance

The following performance targets were agreed in the OLA, on a monthly basis:

  • Availability: 95%
  • Reliability: 95%

Other availability requirements:
- the service is accessible through EGI Check-in, which in turn also offers access via an X.509 certificate by selecting the "IGTF Proxy certificate" option
- the service is accessible via a web UI and offers a RESTful API
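As an illustration of how monthly figures relate to the targets above, here is a minimal sketch of checking reported numbers against the 95% OLA thresholds. The helper names and the sample downtime figures are hypothetical, not part of any EGI tool:

```python
# Hypothetical helpers for checking monthly figures against the OLA
# targets (95% availability and reliability). Illustrative only.

OLA_TARGETS = {"availability": 95.0, "reliability": 95.0}

def availability(total_minutes, downtime_minutes):
    """Percentage of the month the service was up."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def reliability(total_minutes, downtime_minutes, scheduled_minutes):
    """Like availability, but scheduled downtime is excluded from the known time."""
    known = total_minutes - scheduled_minutes
    return 100.0 * (known - downtime_minutes) / known

def meets_targets(av, rel, targets=OLA_TARGETS):
    """True if both figures reach their monthly OLA targets."""
    return av >= targets["availability"] and rel >= targets["reliability"]

# Example: a 30-day month with 6 hours of unscheduled downtime.
month = 30 * 24 * 60
av = availability(month, downtime_minutes=360)
rel = reliability(month, downtime_minutes=360, scheduled_minutes=0)
print(round(av, 2), meets_targets(av, rel))  # prints 99.17 True
```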

The service availability is regularly tested by the Nagios probes eu.egi.CertValidity and org.nagiosexchange.AppDB-WebCheck: https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?host=appdb.egi.eu&style=detail
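In the spirit of the web check above, the following sketch shows the kind of simple HTTP availability probe such monitoring performs. This is not the actual Nagios plugin code; the timeout and status logic are illustrative assumptions:

```python
# Minimal sketch of an HTTP availability check, in the spirit of the
# AppDB-WebCheck probe. NOT the actual Nagios plugin; timeout and
# status handling are illustrative assumptions.
import urllib.request

def check_http(url, timeout=10):
    """Return (ok, detail) for a simple availability probe."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, f"HTTP {resp.status}"
    except OSError as exc:  # covers URLError, timeouts, DNS failures
        return False, f"connection failed: {exc}"

# Nagios-style output: OK when the front page answers with HTTP 200.
ok, detail = check_http("https://appdb.egi.eu/")
print(("OK: " if ok else "CRITICAL: ") + detail)
```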

The Availability and Reliability reports are produced by ARGO in near real time and are also periodically collected into the Documentation Database.

Over the past years, the performance figures have not highlighted any particular Av/Co issues for AppDB requiring further investigation.

Risks assessment and management

For more details, please look at the Google spreadsheet. A summary of the assessment is reported here.

Risks analysis

Risk id | Risk description | Affected components | Established measures | Risk level | Expected duration of downtime / time for recovery | Comment
1 | Service unavailable / loss of data due to hardware failure | All | All services run on virtual machines. In case of hardware failure of the host machine, the virtual machine can be re-instantiated on another hypervisor in the private cloud. Daily backups of the service, including database data. | Low | up to 8 hours (1 working day) | the measures already in place are considered satisfactory and the risk level is acceptable
2 | Service unavailable / loss of data due to software failure | All | Restoring of the codebase via internal source code repositories | Low | up to 8 hours (1 working day) | the measures already in place are considered satisfactory and the risk level is acceptable
3 | Service unavailable / loss of data due to human error | All | Restoring of the codebase via internal source code repositories and of data from the backup service | Medium | up to 4 hours (half working day) | the measures already in place are considered satisfactory and the risk level is acceptable
4 | Service unavailable due to network failure (network outage with causes external to the site) | Web front/public APIs | The University of Athens, which acts as ISP to IASA, has redundant network connectivity. | Low | up to 4 hours (half working day) | the measures already in place are considered satisfactory and the risk level is acceptable
5 | Unavailability of key technical and support staff (holiday periods, sickness, ...) | All | AppDB technical/support staff try not to be on leave at the same time. Even if that happens (typically during August), there is always a MOD (manager on duty) and secure remote access is always available. | Low | 1 or more working days | the measures already in place are considered satisfactory and the risk level is acceptable
6 | Major disruption in the data centre (e.g. fire, flood, or electrical failure) | All | All virtual machines and data are backed up on a daily basis. In addition, critical components such as backup servers, common storage, and network devices are supported by a UPS and a power generator. | Low | 1 or more working days | the measures already in place are considered satisfactory and the risk level is acceptable
7 | Major security incident: the system is compromised by external attackers and needs to be reinstalled and restored | Frontend and DB | All virtual machines, data, and code are backed up on a daily basis. | Low | up to 8 hours (1 working day) | the measures already in place are considered satisfactory and the risk level is acceptable
8 | (D)DoS attack: the service is unavailable because of a coordinated DDoS | Web front/public APIs | The local network team provides monitoring and protection against DoS attacks; the firewall can limit the impact of a DDoS. | Low | up to 4 hours (half working day) | the measures already in place are considered satisfactory and the risk level is acceptable

Outcome


The level of all the identified risks is acceptable and the countermeasures already adopted are considered satisfactory.

Additional information

- procedures for the countermeasures to invoke in case a risk occurs (a link is provided if public)

- the Availability targets don't change in case the plan is invoked.

- recovery requirements:
-- Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 5 days
-- Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity (this has to be less than MTPoD)): 2 days
-- Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): 5 days
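The recovery requirements above can be sanity-checked with a short sketch. The values mirror the plan (MTPoD 5 days, RTO 2 days, RPO 5 days); the helper function and the daily-backup figure from the risk table are used for illustration:

```python
# Hypothetical sanity check of the recovery requirements above.
# Values mirror the plan; the checks themselves are illustrative.
from datetime import timedelta

MTPOD = timedelta(days=5)   # maximum tolerable period of disruption
RTO = timedelta(days=2)     # recovery time objective
RPO = timedelta(days=5)     # recovery point objective

# The plan requires the RTO to be less than the MTPoD.
assert RTO < MTPOD

def backup_interval_ok(interval, rpo=RPO):
    """A backup schedule satisfies the RPO if at most `rpo` of data can be lost."""
    return interval <= rpo

# Daily backups (see the risk table measures) easily satisfy a 5-day RPO.
print(backup_interval_ok(timedelta(days=1)))  # prints True
```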

- the approach for the return to normal working conditions is as reported in the risk assessment.

Availability and Continuity test


The proposed A/C test focuses on a recovery scenario: the service is supposed to have been disrupted and needs to be reinstalled from scratch. The last backup of the data will be used to restore the service, it will be verified how much information is lost, and the time spent will be measured.

Performing this test will be useful to spot any issue in the recovery procedures of the service.

Test details and outcome

The recovery test has been done on the development instance:

  • power off the AppDB-Dev VM - 1 min
  • retrieve/decompress/register the latest VM image from the backup - 20 min
  • instantiate the VM - 5 min
  • retrieve the latest DB data from the backup - 1 min
  • data recovery completed - 2 min
  • recovered machine rebooted and services back online - 1 min

Total: 30 minutes
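The arithmetic of the test steps above can be tallied with a short sketch (the step labels paraphrase the list and are not an official breakdown):

```python
# Illustrative tally of the recovery-test steps above, in minutes.
from datetime import timedelta

steps = {
    "power off the AppDB-Dev VM": 1,
    "retrieve/decompress/register latest VM image": 20,
    "instantiate the VM": 5,
    "retrieve latest DB data from backup": 1,
    "data recovery": 2,
    "reboot and bring services online": 1,
}

total = timedelta(minutes=sum(steps.values()))
print(total)                       # prints 0:30:00, matching the reported total
print(total <= timedelta(days=2))  # prints True: well within the 2-day RTO
```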

The test can be considered successful: recovering the service took relatively little time, and with such a short recovery time the infrastructure would not suffer serious consequences in case of a real disruption.

Revision History

Version | Authors | Date | Comments

| Alessandro Paolini | 2018-10-29 | first draft, discussed with the provider

| Alessandro Paolini | 2018-11-23 | test performed, plan finalised

| Alessandro Paolini | 2019-11-26 | starting the yearly review