The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.

AppDB Availability and Continuity Plan

Revision as of 15:37, 17 February 2021



Back to main page: Services Availability Continuity Plans

Introduction

This page reports the Availability and Continuity Plan for the AppDB, and it is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk occurring or its impact, a new treatment is agreed with the service provider to improve the availability and continuity of the service. The process concludes with an availability and continuity test.

                      Last         Next
Risks assessment      2020-01-27   2021 January
Av/Co plan and test   2018-11-23   --

Previous plans are collected here: https://documents.egi.eu/secure/ShowDocument?docid=3544

Performances

The following performance targets were agreed in the OLA, on a monthly basis:

  • Availability: 95%
  • Reliability: 95%
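
To make the 95% monthly target concrete, here is a quick sketch of the downtime budget it implies (the 95% figure is from the OLA above; the 30-day month is an illustrative assumption):

```python
# Downtime budget implied by a monthly availability target.
def max_downtime_hours(target: float, days_in_month: int = 30) -> float:
    """Hours of downtime per month still compatible with `target`."""
    total_hours = days_in_month * 24
    return round(total_hours * (1 - target), 2)

# At the 95% target, up to 36 hours of downtime in a 30-day month
# would still meet the objective.
print(max_downtime_hours(0.95))  # → 36.0
```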

Other availability requirements:

  • the service is accessible through EGI Check-in, which also offers access via an x509 certificate by selecting the "IGTF Proxy certificate" option
  • the service is accessible via a web UI and offers a RESTful API

The service availability is regularly tested by the Nagios probes eu.egi.CertValidity and org.nagiosexchange.AppDB-WebCheck: https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?host=appdb.egi.eu&style=detail
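
The check performed by such a probe can be pictured with a minimal sketch: fetch the front page and map the HTTP outcome to a Nagios-style state. This is illustrative only (the real org.nagiosexchange.AppDB-WebCheck probe is more elaborate); the timeout value is an assumption:

```python
import urllib.error
import urllib.request

def check_web(url: str, timeout: float = 10.0) -> tuple[str, str]:
    """Return a Nagios-style (state, message) pair for a web check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:   # 4xx/5xx answers
        return "CRITICAL", f"{url} answered with HTTP {exc.code}"
    except (urllib.error.URLError, OSError) as exc:
        return "CRITICAL", f"{url} unreachable: {exc}"
    return "OK", f"{url} answered with HTTP {status}"

# Example (needs network access):
# state, message = check_web("https://appdb.egi.eu/")
```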

The performance reports in terms of Availability and Reliability are produced by ARGO on a near real-time basis and are also periodically collected into the Documentation Database.
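
Schematically, the monthly Availability and Reliability figures can be derived from the monitored up/down time; this is a simplified sketch of the usual A/R definitions, not ARGO's exact algorithm:

```python
def a_r(minutes_up: float, minutes_down: float,
        minutes_scheduled_down: float = 0.0) -> tuple[float, float]:
    """Availability counts all known downtime; reliability excuses
    the scheduled part of it."""
    known = minutes_up + minutes_down  # time with a known status
    availability = minutes_up / known
    reliability = minutes_up / (known - minutes_scheduled_down)
    return availability, reliability

# A 30-day month (43200 min) with 36 h (2160 min) of unscheduled
# downtime sits exactly at the 95% availability target.
a, r = a_r(minutes_up=41040, minutes_down=2160)
print(round(a, 3), round(r, 3))  # → 0.95 0.95
```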

Over the past years, the performance reports have not highlighted any particular Av/Co issues for the AppDB that needed further investigation.

Risks assessment and management

For more details, please see the google spreadsheet. A summary of the assessment is reported here.

Risks analysis

(to update)
Risk 1: Service unavailable / loss of data due to hardware failure
  • Affected components: All
  • Established measures: Reactive: all services are running on virtual machines; in case of hardware failure of the host machine, the virtual machine can be re-instantiated on another hypervisor in the private cloud. Daily backups of the service, including database data.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 8 hours (1 working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 2: Service unavailable / loss of data due to software failure
  • Affected components: All
  • Established measures: Preemptive: software changes are tested on a development instance before being rolled out into production. Reactive: the services' source code is under version control in internal repositories and service data is backed up.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 8 hours (1 working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 3: Service unavailable / loss of data due to human error
  • Affected components: All
  • Established measures: Preemptive: staff access rights to service installations are assigned according to seniority. Reactive: the services' source code is under version control in internal repositories and service data is backed up.
  • Risk level: Medium
  • Expected duration of downtime / time for recovery: up to 4 hours (half working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 4: Service unavailable due to network failure (network outage with causes external to the site)
  • Affected components: Web front-end / public APIs
  • Established measures: Preemptive: the University of Athens, which acts as an ISP to IASA, has redundant network connectivity.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 4 hours (half working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 5: Unavailability of key technical and support staff (holiday periods, sickness, ...)
  • Affected components: All
  • Established measures: Preemptive: the AppDB technical/support staff try not to be on leave at the same time; even if that happens (typically during August), there is always an MOD (manager on duty). Reactive: secure remote access and monitoring are always available.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: 1 or more working days
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 6: Major disruption in the data centre (fire, flood or electric failure, for example)
  • Affected components: All
  • Established measures: Reactive: all virtual machines and data are backed up on a daily basis; the backup cluster is located in a separate room. In addition, critical components such as backup servers, common storage and network devices are supported by a UPS and a power generator.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: 1 or more working days
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 7: Major security incident; the system is compromised by external attackers and needs to be reinstalled and restored
  • Affected components: Front-end and DB
  • Established measures: Preemptive: security updates are applied regularly. Reactive: all virtual machines, data, and code are backed up on a daily basis.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 8 hours (1 working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 8: (D)DoS attack; the service is unavailable because of a coordinated DDoS
  • Affected components: Web front-end / public APIs
  • Established measures: Reactive: the local network team provides monitoring and protection against DoS attacks; the firewall can limit the impact of a DDoS.
  • Risk level: Low
  • Expected duration of downtime / time for recovery: up to 4 hours (half working day)
  • Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Outcome

The level of all the identified risks is acceptable, and the countermeasures already adopted are considered satisfactory.

Additional information

  • procedures for the countermeasures to be invoked in case a risk occurs are available to the service provider
  • the Availability targets do not change in case the plan is invoked
  • recovery requirements:
    • Maximum tolerable period of disruption (MTPoD), i.e. the maximum amount of time that a service can be unavailable or undelivered, after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences: 5 days
    • Recovery time objective (RTO), i.e. the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity (this has to be less than the MTPoD): 2 days
    • Recovery point objective (RPO), i.e. the acceptable latency of data that will not be recovered: 5 days
  • the approach for the return to normal working conditions is as reported in the risk assessment
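
The relation between these figures can be stated as a trivial consistency check (the values are the ones listed above, in days; the helper name is illustrative):

```python
# Recovery requirements from this plan, in days.
MTPOD_DAYS = 5  # maximum tolerable period of disruption
RTO_DAYS = 2    # recovery time objective
RPO_DAYS = 5    # recovery point objective

def requirements_consistent(mtpod: float, rto: float) -> bool:
    """By definition, the RTO has to be less than the MTPoD."""
    return rto < mtpod

print(requirements_consistent(MTPOD_DAYS, RTO_DAYS))  # → True
```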

Availability and Continuity test

There have been no significant changes to this service since the last recovery test, which is therefore still considered valid, so a new test is not required.

We report hereafter the details of the last test performed.

Test details and outcome

The proposed A/C test focuses on a recovery scenario: the service is assumed to have been disrupted and to need reinstalling from scratch. The latest backup of the data is used to restore the service, it is verified how much information is lost, and the time spent is measured.

Performing this test is useful to spot any issues in the recovery procedures of the service.

The recovery test was done on the development instance:

  • power off the AppDB-Dev VM - 1 min
  • retrieve/decompress/register the latest VM image from the backup - 20 mins
  • instantiate the VM - 5 mins
  • retrieve the latest DB data from the backup - 1 min
  • complete the data recovery - 2 mins
  • reboot the recovered machine and bring the services online - 1 min

Total: 30 minutes
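
The timed steps above can be written down as data and summed, which reproduces the reported total (step names paraphrase the list above):

```python
# Recovery-test steps and their measured durations, in minutes.
steps = [
    ("power off the AppDB-Dev VM", 1),
    ("retrieve/decompress/register the latest VM image from backup", 20),
    ("instantiate the VM", 5),
    ("retrieve the latest DB data from backup", 1),
    ("complete the data recovery", 2),
    ("reboot the recovered machine, services back online", 1),
]
total_minutes = sum(minutes for _, minutes in steps)
print(f"Total: {total_minutes} minutes")  # → Total: 30 minutes
```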

The test can be considered successful: recovering the service took relatively little time, and the infrastructure would not suffer serious consequences if the recovery time were similarly short in the event of a real disruption.

Revision History

Version   Authors              Date         Comments
          Alessandro Paolini   2018-10-29   first draft, discussing with the provider
          Alessandro Paolini   2018-11-23   test performed, plan finalised
          Alessandro Paolini   2019-11-26   starting the yearly review
          Alessandro Paolini   2020-01-27   wiki page finalised, review completed
          Alessandro Paolini   2021-02-17   starting the yearly review