
AppDB Availability and Continuity Plan


Introduction

This page reports on the Availability and Continuity Plan for the AppDB and is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk occurring or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.

                       Last         Next
Risks assessment       2020-01-27   2021 January
Av/Co plan and test    2018-11-23   --

Previous plans are collected here: https://documents.egi.eu/secure/ShowDocument?docid=3544

Performances

The following performance targets, evaluated on a monthly basis, were agreed in the OLA (a worked example of the calculation is sketched after the list):

  • Availability: 95%
  • Reliability: 95%
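
For illustration only, the sketch below shows how monthly Availability and Reliability figures of the kind produced by ARGO relate to these targets. The downtime and unknown-status figures are hypothetical example values, and the formulas assume the usual EGI/ARGO convention that Availability excludes periods of unknown monitoring status while Reliability additionally excludes scheduled downtime.

# Hypothetical one-month example of how the 95% targets can be checked.
# Assumed formulas (EGI/ARGO convention):
#   Availability = up_time / (total_time - unknown_time)
#   Reliability  = up_time / (total_time - unknown_time - scheduled_downtime)

total_hours = 30 * 24          # length of the month
unknown_hours = 4              # monitoring results unavailable (example value)
scheduled_hours = 6            # announced maintenance (example value)
unscheduled_hours = 10         # unexpected outages (example value)

up_hours = total_hours - unknown_hours - scheduled_hours - unscheduled_hours

availability = up_hours / (total_hours - unknown_hours)
reliability = up_hours / (total_hours - unknown_hours - scheduled_hours)

print(f"Availability: {availability:.2%}  (target 95%)")  # ~97.77%
print(f"Reliability:  {reliability:.2%}  (target 95%)")   # ~98.59%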

Other availability requirements:

  • the service is accessible through EGI Check-in, which in turn also offers access via an x509 certificate by selecting the "IGTF Proxy certificate" option
  • the service is accessible via the web UI and offers a RESTful API

The service availability is regularly tested by the nagios probes eu.egi.CertValidity and org.nagiosexchange.AppDB-WebCheck: https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?host=appdb.egi.eu&style=detail
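
As a purely illustrative aid, the sketch below performs a comparable check from a local machine: it verifies that the AppDB front page answers over HTTPS and that the host certificate is not about to expire. It only mimics what the two probes verify and is not their actual implementation.

# Minimal local sketch of the two checks named above (web reachability and
# certificate validity). This is NOT the code of the ARGO/nagios probes.
import socket
import ssl
import time
import urllib.request

HOST = "appdb.egi.eu"

# Web check: the front page should answer with HTTP 200.
with urllib.request.urlopen(f"https://{HOST}/", timeout=30) as response:
    assert response.status == 200, f"unexpected HTTP status {response.status}"

# Certificate validity check: report how long the server certificate remains valid.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=30) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

days_left = (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400
print(f"{HOST}: HTTP OK, certificate expires in {days_left:.0f} days")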

The performance reports in terms of Availability and Reliability are produced by ARGO (http://egi.ui.argo.grnet.gr/egi/OPS-MONITOR-Critical) in near real time, and they are also periodically collected into the Documentation Database (https://documents.egi.eu/public/ShowDocument?docid=2324).

Over the past years, the performance figures have not highlighted any particular Av/Co issues with the AppDB that needed further investigation.

Risks assessment and management

For more details, please look at the google spreadsheet (https://docs.google.com/spreadsheets/d/1g6vR1vlG9eTIny96J2BBQ_tFBVNWp0YMk0f268G99lA/edit#gid=321732303). A summary of the assessment is reported here.

Risks analysis

Risk 1: Service unavailable / loss of data due to hardware failure
  • Affected components: All
  • Established measures: Reactive: all services run on virtual machines; in case of hardware failure of the host machine, the virtual machine can be re-instantiated on another hypervisor in the private cloud. Daily backups of the service, including database data.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 8 hours (1 working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 2: Service unavailable / loss of data due to software failure
  • Affected components: All
  • Established measures: Preemptive: software changes are tested on a development instance before being rolled out into production. Reactive: the services' source code is kept under version control in internal repositories and service data is backed up.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 8 hours (1 working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 3: Service unavailable / loss of data due to human error
  • Affected components: All
  • Established measures: Preemptive: staff access rights to service installations are assigned according to seniority. Reactive: the services' source code is kept under version control in internal repositories and service data is backed up.
  • Risk level: Medium
  • Expected duration of downtime / time for recovery: up to 4 hours (half working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 4: Service unavailable due to network failure (network outage with causes external to the site)
  • Affected components: Web front-end/public APIs
  • Established measures: Preemptive: the University of Athens, which acts as an ISP to IASA, has redundant network connectivity.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 4 hours (half working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 5: Unavailability of key technical and support staff (holiday periods, sickness, ...)
  • Affected components: All
  • Established measures: Preemptive: the AppDB technical/support staff try not to be on leave at the same time; even if that happens (typically during August), there is always an MOD (manager on duty). Reactive: secure remote access and monitoring are always available.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: 1 or more working days
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 6: Major disruption in the data centre (for example fire, flood, or electrical failure)
  • Affected components: All
  • Established measures: Reactive: all virtual machines and data are backed up on a daily basis, and the backup cluster is located in a separate room. In addition, critical components such as backup servers, common storage, and network devices are supported by a UPS and a power generator.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: 1 or more working days
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 7: Major security incident; the system is compromised by external attackers and needs to be reinstalled and restored
  • Affected components: Frontend and DB
  • Established measures: Preemptive: security updates are applied regularly. Reactive: all virtual machines, data, and code are backed up on a daily basis.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 8 hours (1 working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 8: (D)DoS attack; the service is unavailable because of a coordinated DDoS
  • Affected components: Web front-end/public APIs
  • Established measures: Reactive: the local network team provides monitoring and protection against DoS attacks, and the firewall can limit the impact of a DDoS.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 4 hours (half working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Outcome

The level of all the identified risks is acceptable and the countermeasures already adopted are considered satisfactory.

Additional information

  • procedures for the countermeasures to be invoked in case a risk occurs are available to the service provider
  • the Availability targets do not change in case the plan is invoked
  • recovery requirements:
    • Maximum tolerable period of disruption (MTPoD), i.e. the maximum amount of time that the service can be unavailable or undelivered after a disruptive event before its stakeholders perceive unacceptable consequences: 5 days
    • Recovery time objective (RTO), i.e. the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity (this has to be less than the MTPoD): 2 days
    • Recovery point objective (RPO), i.e. the acceptable latency of data that will not be recovered: 5 days
  • the approach for returning to normal working conditions is as reported in the risk assessment

Availability and Continuity test

There have been no significant changes to this service since the last recovery test, which is therefore considered still valid, so a new test is not required.

We report hereafter the details of the last test performed.

Test details and outcome

The proposed A/C test focuses on a recovery scenario: the service is assumed to have been disrupted and to need reinstalling from scratch. The latest data backup will be used to restore the service, the amount of information lost will be verified, and the time spent will be measured.

Performing this test is useful for spotting any issues in the recovery procedures of the service.

The recovery test has been done on the development instance:

  • power off the AppDB-Dev VM - 1 min
  • retrieve/decompress/register the latest VM image from the backup - 20 min
  • instantiate the VM - 5 min
  • retrieve the latest DB data from the backup - 1 min
  • complete the data recovery - 2 min
  • reboot the recovered machine and bring the services back online - 1 min

Total: 30 minutes

The test can be considered successful: recovering the service took relatively little time, and the infrastructure would not suffer serious consequences if the recovery time were this short during a real disruption.
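
As a small illustrative cross-check (not part of the plan itself), the sketch below relates the step timings recorded above to the recovery objectives listed in the "Additional information" section; the only added assumption is that daily backups bound the worst-case data loss to roughly one day.

# Cross-check of the measured recovery time against the stated objectives.
# Step durations are those recorded during the test; RTO, MTPoD and RPO come
# from the "Additional information" section above.

step_minutes = {
    "power off the AppDB-Dev VM": 1,
    "retrieve/decompress/register the latest VM image": 20,
    "instantiate the VM": 5,
    "retrieve the latest DB data from the backup": 1,
    "complete the data recovery": 2,
    "reboot and bring the services back online": 1,
}

total_minutes = sum(step_minutes.values())   # 30 minutes
rto_minutes = 2 * 24 * 60                    # RTO: 2 days
mtpod_minutes = 5 * 24 * 60                  # MTPoD: 5 days
rpo_hours = 5 * 24                           # RPO: 5 days
backup_interval_hours = 24                   # daily backups => assumed worst-case data loss

print(f"measured recovery time: {total_minutes} min")
print(f"within the RTO (2 days): {total_minutes <= rto_minutes}")
print(f"within the MTPoD (5 days): {total_minutes <= mtpod_minutes}")
print(f"worst-case data loss within the RPO (5 days): {backup_interval_hours <= rpo_hours}")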

Revision History

Version   Authors              Date         Comments
          Alessandro Paolini   2018-10-29   first draft, discussing with the provider
          Alessandro Paolini   2018-11-23   test performed, plan finalised
          Alessandro Paolini   2019-11-26   starting the yearly review
          Alessandro Paolini   2020-01-27   wiki page finalised, review completed
          Alessandro Paolini   2021-02-17   starting the yearly review