ARGOMS Availability and Continuity Plan

Latest revision as of 17:57, 18 November 2020


Introduction

This page reports on the Availability and Continuity Plan for the ARGO Messaging Service and is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing the likelihood of a risk, or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.

                      Last        Next
Risk assessment       2020-11-16  Nov 2021
Av/Co plan and test   2020-11-18  Nov 2021

Previous plans are collected here: https://documents.egi.eu/document/3650

Performances

The following performance targets, on a monthly basis, were agreed in the OLA:

  • Availability: 98%
  • Reliability: 98%
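As a rough illustration of what these targets allow, a 98% monthly availability corresponds to a downtime budget of roughly 14–15 hours per month. This arithmetic is ours, not part of the OLA:

```python
# Downtime budget implied by a monthly availability target.
# Illustrative arithmetic only; the OLA defines the authoritative targets.

def downtime_budget_hours(target: float, days_in_month: int = 30) -> float:
    """Hours of downtime per month compatible with an availability target."""
    total_hours = days_in_month * 24
    return (1.0 - target) * total_hours

print(downtime_budget_hours(0.98))  # 14.4 hours for a 30-day month
```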

Other availability requirements:

  • The service is accessible through AMS tokens.
  • The service is accessible via Argo-AuthN, an authentication service that maps alternative authentication mechanisms to AMS tokens (e.g. X.509 to AMS tokens).
  • The service is accessible via the API and the argo-ams-library.
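As a sketch of how a client might publish a message authenticated with an AMS token, the fragment below builds a Pub/Sub-style publish request. The host, project, and topic names are hypothetical, and the exact URL layout is an assumption based on AMS's Pub/Sub-like design; in practice the argo-ams-library wraps these details:

```python
import base64
import json

# Sketch of an AMS publish request authenticated with an AMS token.
# HOST, PROJECT and TOPIC are illustrative assumptions; consult the AMS
# API documentation for the authoritative endpoint format.
HOST = "msg.example.org"
PROJECT = "EGI"
TOPIC = "monitoring"

def build_publish_request(token: str, payload: dict) -> tuple[str, str]:
    """Return (url, body) for a Pub/Sub-style publish call."""
    url = (f"https://{HOST}/v1/projects/{PROJECT}/topics/{TOPIC}:publish"
           f"?key={token}")
    # Message payloads are base64-encoded, as in the Pub/Sub message format.
    data = base64.b64encode(json.dumps(payload).encode()).decode()
    body = json.dumps({"messages": [{"data": data}]})
    return url, body

url, body = build_publish_request("SECRET_TOKEN", {"status": "OK"})
print(url)
```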

The performance reports in terms of Availability and Reliability are produced by ARGO (http://egi.ui.argo.grnet.gr/egi/OPS-MONITOR-Critical) on an almost real-time basis, and they are also periodically collected into the Documentation Database (https://documents.egi.eu/public/ShowDocument?docid=2324):

  • GRIDOPS-MSG A&R: https://argo.egi.eu/egi/report-ar-group-details/OPS-MONITOR-Critical/SITES/GRIDOPS-MSG/details
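Availability and reliability in such reports are, in essence, ratios over the monitored period. A simplified version of the usual computation is sketched below; the real ARGO reports also handle periods of unknown monitoring status, which this sketch ignores:

```python
# Simplified availability/reliability computation over a reporting period.
# Real ARGO reports also account for periods of unknown monitoring status;
# this sketch deliberately omits that refinement.

def availability(uptime: float, total: float) -> float:
    """Fraction of the whole period during which the service was up."""
    return uptime / total

def reliability(uptime: float, total: float, scheduled_downtime: float) -> float:
    """Like availability, but scheduled downtime is excluded from the denominator."""
    return uptime / (total - scheduled_downtime)

# Example: 720 h month, 10 h unscheduled outage, 4 h scheduled maintenance.
up = 720 - 10 - 4
print(round(availability(up, 720), 4))    # 0.9806
print(round(reliability(up, 720, 4), 4))  # 0.986
```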

Risks assessment and management

For more details, please look at the google spreadsheet (https://docs.google.com/spreadsheets/d/1lVt3uvpJ7kguSQbnxZ8SD3Sljtn2DHO8N_jwxJzg1rY/edit#gid=894860560). A summary of the assessment is reported here.

Risks analysis

All risks below affect the Argo Messaging Service. Unless noted otherwise, the established measures are Automated Deployment and Backup Procedures that allow for rapid redeployment of affected components, and the treatment is to redirect DNS to point to one of the 3 instances running behind the HA/LB service while the affected instance is reinstalled/migrated in a new VM and DNS is updated.

Risk 1: Service unavailable / loss of data due to hardware failure. Risk level: Low. Expected downtime / time for recovery: up to 8 hours.
Risk 2: Service unavailable / loss of data due to software failure. Risk level: Low. Expected downtime / time for recovery: up to 8 hours.
Risk 3: Service unavailable / loss of data due to human error. Risk level: Low. Expected downtime / time for recovery: up to 8 hours.
Risk 4: Service unavailable due to network failure (network outage with causes external to the site). Risk level: Low. Expected downtime / time for recovery: up to 8 hours.
Risk 5: Unavailability of key technical and support staff (holiday periods, sickness, ...). Established measures: there is always one member of the team on call. Risk level: Low. Expected downtime / time for recovery: up to 8 hours.
Risk 6: Major disruption in the data centre (e.g. fire, flood, or electric failure). Risk level: Medium. Expected downtime / time for recovery: 1 or more working days.
Risk 7: Major security incident: the system is compromised by external attackers and needs to be reinstalled and restored. Risk level: Low. Expected downtime / time for recovery: 1 or more working days.
Risk 8: (D)DoS attack: the service is unavailable because of a coordinated DDoS. Risk level: Low. Expected downtime / time for recovery: up to 8 hours.
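The treatment for most of these risks relies on the HA/LB setup: while a failed instance is reinstalled, DNS is pointed at one of the healthy instances. A minimal sketch of the selection step (hostnames and health data are hypothetical; the actual HA/LB service performs this selection automatically):

```python
# Sketch of choosing a healthy AMS instance to which DNS can be redirected
# while a failed instance is reinstalled. Hostnames are hypothetical.

def pick_failover_target(health: dict[str, bool]) -> str:
    """Return the first healthy instance (in sorted order), or raise if none."""
    for host, healthy in sorted(health.items()):
        if healthy:
            return host
    raise RuntimeError("no healthy AMS instance available")

status = {
    "ams1.example.org": False,  # failed, being reinstalled
    "ams2.example.org": True,
    "ams3.example.org": True,
}
print(pick_failover_target(status))  # ams2.example.org
```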

Outcome

The level of all the identified risks is acceptable and the countermeasures already adopted are considered satisfactory.

Additional information

  • The suppliers have Ansible scripts for automated deployment, which allow them to:
    • install a new instance when needed
    • update an instance when needed
  • The Availability and Reliability targets do not change in case the plan is invoked.
  • The support unit Messaging shall be used to report any incident or service request.
  • The providers can contact EGI Operations via ticket or email in case the continuity plan is invoked, or to discuss any change to it.
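The suppliers' Ansible-based deployment mentioned above might look, in a very simplified and hypothetical sketch, like this (the host group, package name, and module choice are assumptions; the suppliers' real roles and variables are not published in this plan):

```yaml
# Hypothetical sketch of an automated-deployment playbook; the suppliers'
# actual Ansible roles, hosts and variables will differ.
- hosts: ams_nodes
  become: true
  tasks:
    - name: Install or update the AMS package
      ansible.builtin.package:
        name: argo-ams
        state: latest
```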

Recovery requirements:

  • Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 2 days
  • Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity (this has to be less than MTPoD)): 2 days
  • Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): not applicable
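These requirements imply an internal consistency condition, namely that the RTO must not exceed the MTPoD, which can be encoded as a trivial check:

```python
# Consistency check over the recovery requirements stated above:
# the RTO must not exceed the MTPoD (here both are 2 days, RPO is n/a).

MTPOD_DAYS = 2   # maximum tolerable period of disruption
RTO_DAYS = 2     # recovery time objective
RPO = None       # recovery point objective: not applicable

assert RTO_DAYS <= MTPOD_DAYS, "RTO must not exceed the MTPoD"
print("recovery requirements are consistent")
```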

Availability and Continuity test

The proposed A/C test will focus on a recovery scenario: the service has been disrupted and needs to be reinstalled from scratch. The time spent restoring the service will be measured, using the last backup of the data stored in it and evaluating any loss of information. Performing this test will be useful to spot any issues in the recovery procedures of the service.

  • Start of recovery scenario test:
Tue Feb 19 09:09:02 CET 2019: one of the 3 VMs running the AMS service instance has been disrupted and needs to be reinstalled from scratch.
  • End of recovery scenario test:
Tue Feb 19 10:10:01 CET 2019: we assumed a scenario where the second AMS node faced a hardware failure and a new VM had to be spawned in order to restore it and to provide the AMS again in full high availability/load balancing mode. The process of spawning the new VM started at 09:25 CET (we allocated approximately 15 minutes for simulating the analysis after the detection of the VM failure at 09:09 CET).
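From the timestamps reported above, the elapsed times of the test can be computed directly:

```python
# Recovery-test timeline from the timestamps reported above (CET, 2019-02-19).
from datetime import datetime

fmt = "%H:%M:%S"
failure = datetime.strptime("09:09:02", fmt)   # VM failure detected
spawn = datetime.strptime("09:25:00", fmt)     # new VM spawn started
restored = datetime.strptime("10:10:01", fmt)  # service fully restored

print("analysis phase:", spawn - failure)    # ~16 minutes
print("total recovery:", restored - failure) # ~61 minutes
```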

It should be noted that during the ~1 hour that the AMS node was down, existing user sessions were not affected and new user sessions could be established without any noticeable delay. The test showed that, in case of failure of one of the AMS nodes, there is no impact on the users.

More details about the test are available here: https://ggus.eu/index.php?mode=download&attid=ATT111216

Revision History

Version | Authors | Date | Comments
- | Alessandro Paolini | 2019-02-25 | first draft
- | Alessandro Paolini | 2019-02-26 | added the information about the AC test; plan finalised
- | Alessandro Paolini | 2020-11-18 | yearly review completed: no changes in the risk assessment, no need for a new continuity/recovery test, updated the sections "Performance" and "Additional information"