
EOSC Portal Availability and Continuity Plan

= Introduction =


This page reports on the Availability and Continuity Plan for the EOSC Portal and is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing the likelihood of a risk, or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.


{| class="wikitable"
|-
!
! Last
! Next
|-
! scope="row"| Risks assessment
| 2020 Jul
| Q3 2021
|-
! scope="row"| Av/Co plan and test
| 2020 Nov
| Q3 2021
|-
|}
= Availability requirements and performances =


Service level targets:
* Availability: 90%
* Reliability: 95%

Other availability requirements:
* The service is accessible via webUI.
* The service availability is regularly tested by the Nagios probe org.nagiosexchange.Portal-WebCheck, currently on the uncert ARGO instance (a minimal sketch of such a web check is given after this list): https://argo-mon-uncert.cro-ngi.hr/nagios/cgi-bin/status.cgi?host=eosc-portal.eu&style=detail
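The web check above is essentially an HTTP request against the portal front page. The following minimal sketch is only an illustration, not the actual org.nagiosexchange.Portal-WebCheck probe; the 10-second timeout and the 30-day month used for the downtime budget are assumptions. It shows such a check together with the monthly downtime allowed by the targets above:

<pre>
# Minimal availability check sketch (an illustration, not the actual Nagios probe).
import urllib.error
import urllib.request

PORTAL_URL = "https://eosc-portal.eu/"   # endpoint monitored by the web check
AVAILABILITY_TARGET = 0.90               # monthly availability target
RELIABILITY_TARGET = 0.95                # monthly reliability target

def portal_is_up(url: str = PORTAL_URL, timeout: int = 10) -> bool:
    """Return True if the portal front page answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Downtime budget implied by the targets, assuming a 30-day month.
hours_per_month = 30 * 24
print("Portal up:", portal_is_up())
print("Allowed downtime (availability 90%%): %.1f hours/month"
      % ((1 - AVAILABILITY_TARGET) * hours_per_month))
print("Allowed unscheduled downtime (reliability 95%%): %.1f hours/month"
      % ((1 - RELIABILITY_TARGET) * hours_per_month))
</pre>

Run at regular intervals, a check of this kind produces the up/down samples from which the monthly availability and reliability figures can be computed.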


= Risks assessment and management =
For more details, please look at the google spreadsheet. We report here a summary of the assessment.

== Risks analysis ==
{| class="wikitable"
|-
! Risk id
! Risk description
! Affected components
! Established measures
! Risk level
! Expected duration of downtime / time for recovery
! Comment
|-
| 1
| Service unavailable / loss of data due to hardware failure
| EOSC Portal user GUI, EOSC Portal database
| All services are running on virtual machines. In case of hardware failure of the host machine, the virtual machine can be re-instantiated.
| style="background: red"| High
| up to 8 hours
| style="background: red"| It is necessary to verify the implementation status of the hardware HA configuration.
|-
| 2
| Service unavailable / loss of data due to a failure of the CEPH file system in the cloud infrastructure
| EOSC Portal user GUI, EOSC Portal database
| All service databases are backed up on tape storage. The service VM will be restored with the database backup from the tape storage.
| style="background: yellow"| Medium
| 1 or more working days
|
|-
| 3
| Service unavailable / loss of data due to software failure
| EOSC Portal user GUI, EOSC Portal database
| Monitoring of system health, backups
| style="background: yellow"| Medium
| up to 4 hours
|
|-
| 4
| Service unavailable / loss of data due to human error
| EOSC Portal user GUI
| Monitoring of system health, backups
| style="background: yellow"| Medium
| less than 1 hour
|
|-
| 5
| Service unavailable due to network failure (network outage with causes external to the site)
|
| Monitoring of service availability; ACC Cyfronet AGH has redundant network connectivity.
| style="background: green"| Low
| up to 4 hours
|
|-
| 6
| Unavailability of key technical and support staff (holiday periods, sickness, ...)
| EOSC Portal user GUI, EOSC Portal database
| More personnel may be involved in the operation of the EOSC Portal.
| style="background: green"| Low
| 1 or more working days
|
|-
| 7
| Major disruption in the data centre (e.g. fire, flood or electric failure)
| EOSC Portal user GUI, EOSC Portal database
| The computing centre has an electric backup system and fire control devices.
| style="background: yellow"| Medium
| 1 or more working days
|
|-
| 8
| Major security incident. The system is compromised by external attackers and needs to be reinstalled and restored.
| EOSC Portal user GUI
| Monitoring of system health, security audits, backups, following best practices for security configuration and timely implementation of patches
| style="background: green"| Low
| up to 8 hours
|
|-
| 9
| (D)DoS attack. The service is unavailable because of a coordinated DDoS.
|
| The local network team provides monitoring and protection against DoS attacks; the firewall can limit the impact of a DDoS.
| style="background: yellow"| Medium
| up to 8 hours
|
|}


== Outcome ==
<pre style="color: blue">
Currently there is no hardware HA configuration. It is important to agree with the provider on a plan for implementing it, in order to decrease the rating of risk 1.

The providers can contact EGI Operations via ticket or email in case the continuity plan is invoked, or to discuss any change to it.
</pre>


== Additional information ==
The provider has internal procedures to invoke in case of risk occurrence.

The Availability targets don't change in case the plan is invoked.

Recovery requirements (a worked sketch relating these targets follows the list):
*Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 1 day
*Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity; this has to be less than the MTPoD): 4 hours
*Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): n.a.
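As a worked illustration of how these targets relate, the sketch below checks an outage duration against the 4-hour RTO and the 1-day MTPoD; the incident timestamps are made-up examples, not measured values:

<pre>
# Illustrative check of a recovery time against the RTO and MTPoD targets.
# The incident timestamps are hypothetical examples, not measured values.
from datetime import datetime, timedelta

MTPOD = timedelta(days=1)    # maximum tolerable period of disruption
RTO = timedelta(hours=4)     # recovery time objective

assert RTO <= MTPOD, "the RTO has to be less than the MTPoD"

disruption_start = datetime(2020, 11, 10, 9, 0)    # example: outage detected at 09:00
service_restored = datetime(2020, 11, 10, 12, 30)  # example: service back at 12:30

outage = service_restored - disruption_start
print("Outage lasted:", outage)
print("Within RTO (4 h):", outage <= RTO)
print("Within MTPoD (1 day):", outage <= MTPOD)
</pre>

A recovery that completes within the RTO automatically stays within the MTPoD, which is why the RTO has to be the smaller of the two.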


The approach for the return to normal working conditions is as reported in the risk assessment.


= Availability and Continuity test =
 
The proposed A/C test will focus on a recovery scenario: the service is supposed to have been disrupted and needs to be reinstalled from scratch. Typically this covers risks 1, 3, and 8. The last backup of the data will be used to restore the service, it will be verified how much information is lost, and the time spent will be measured.


Performing this test will be useful to spot any issue in the recovery procedures of the service.


== Test details and outcome ==
The overall time needed to recover the service: 4 hours.

The [https://docs.google.com/document/d/1XFtsShYbvZd4CpFYNuJe-KSXXMdlUMx15GCYhiOZ654/edit?pli=1 proposed test] focused on a recovery scenario:
# Restore the virtual machines from the tape backups (haproxy, apache httpd, mysql).
# Restore the databases using the latest dump (see the sketch after this list).
# Set up haproxy to proxy the services from the new machines.
# Associate floating IP addresses in OpenStack to the haproxy service and open the firewall.
# Test the instance: https://eosc-portal.eu
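As an illustration of the database restore step (step 2 above), the sketch below locates the most recent dump and feeds it to the database server. The backup directory, the dump naming and the database name are assumptions made for the example, not the provider's actual recovery scripts, and a plain SQL dump restored with the mysql client is assumed:

<pre>
# Illustrative restore of the most recent database dump (hypothetical paths and names).
import glob
import os
import subprocess

BACKUP_DIR = "/backup/eosc-portal"   # assumed location of the dumps restored from tape
DATABASE = "eosc_portal"             # assumed database name

def latest_dump(backup_dir: str = BACKUP_DIR) -> str:
    """Return the newest *.sql dump in the backup directory."""
    dumps = glob.glob(os.path.join(backup_dir, "*.sql"))
    if not dumps:
        raise FileNotFoundError("no SQL dumps found in " + backup_dir)
    return max(dumps, key=os.path.getmtime)

def restore(dump_path: str, database: str = DATABASE) -> None:
    """Feed the dump to the mysql client, equivalent to `mysql <db> < dump.sql`."""
    with open(dump_path, "rb") as dump:
        subprocess.run(["mysql", database], stdin=dump, check=True)

if __name__ == "__main__":
    dump = latest_dump()
    print("Restoring", dump)
    restore(dump)
</pre>

In the real recovery the provider's equivalent operation is expected to fit within the overall 4-hour estimate given above.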


= Revision History  =
{| class="wikitable"
|-
! Version
! Authors
! Date
! Comments
|-
|
| Alessandro Paolini
| 2020-07-02
| risk assessment completed
|-
|
| Alessandro Paolini
| 2020-11-10
| plan completed
|}
