The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.

ARGOMS Availability and Continuity Plan

From EGIWiki
 

Revision as of 17:33, 25 February 2019



Back to main page: Services Availability Continuity Plans

Introduction

This page reports the Availability and Continuity Plan for the ARGO Messaging Service. It is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing the likelihood or the impact of a risk, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process concludes with an availability and continuity test.

{| class="wikitable"
!  !! Last !! Next
|-
| Risk assessment || 2019-02-25 || -
|-
| Av/Co plan and test || - || -
|}

Performances

The performance reports in terms of Availability and Reliability are produced by ARGO on a near real-time basis, and they are also periodically collected into the Documentation Database. The following monthly performance targets were agreed in the OLA:

  • Availability: 98%
  • Reliability: 98%
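As a sanity check, a monthly percentage target translates into a concrete downtime budget. The sketch below (an illustration only, assuming a 30-day month) computes it:

```python
# Allowed downtime implied by a monthly availability target (illustrative
# sketch; assumes a 30-day month for simplicity).

def downtime_budget_hours(target: float, days_in_month: int = 30) -> float:
    """Hours per month the service may be down while still meeting `target`."""
    return days_in_month * 24 * (1 - target)

# A 98% target over a 30-day month leaves a budget of about 14.4 hours.
print(f"{downtime_budget_hours(0.98):.1f} hours/month")  # → 14.4 hours/month
```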

Risks assessment and management

For more details, please look at the Google spreadsheet. A summary of the assessment is reported here.

Risks analysis

{| class="wikitable"
! Risk id !! Risk description !! Affected components !! Established measures !! Risk level !! Treatment !! Expected duration of downtime / time for recovery
|-
| 1 || Service unavailable / loss of data due to hardware failure || Argo Messaging Service || All services/data structures are running on virtual machines and in high-availability mode. In case of a failure on a virtual machine, the backup installation will take over. If the service is unavailable on all instances, a new virtual machine will be spawned in GRNET's private environment. High-availability deployment will reduce outages due to hardware failures to almost zero. || style="background: green" | Low || Restore the service from the backups on new hosts not affected by the failure. || Almost zero
|-
| 2 || Service unavailable / loss of data due to software failure || Argo Messaging Service || All services/data structures are running on virtual machines and in high-availability mode. In case of a software failure on a virtual machine, the backup installation will take over. If the service is unavailable on all instances, a new virtual machine will be spawned in GRNET's private environment. High-availability deployment will reduce outages due to software failures to almost zero. || Medium || Restore the service from the backups; analyse log data in order to find the cause of the software failure. || Zero in the case of a partial failure; 3-4 working hours if a new virtual machine needs to be spawned.
|-
| 3 || Service unavailable / loss of data due to human error || Argo Messaging Service || All services/data structures are running on virtual machines and in high-availability mode. In case of a failure on a virtual machine, the backup installation will take over. If the service is unavailable on all instances, a new virtual machine will be spawned in GRNET's private environment. High-availability deployment will reduce outages due to human error to almost zero. || Medium || Restore the service/data from backups. Revise the actions that led to the failure and modify the processes so that the same error cannot be repeated. || Up to 8 hours
|-
| 4 || Service unavailable due to network failure (network outage with causes external to the site) || Argo Messaging Service || GRNET has redundant network connectivity. || style="background: green" | Low || Standard mitigation and recovery procedures performed by the network operators. || Almost zero; 3-4 working hours in case of maintenance.
|-
| 5 || Unavailability of key technical and support staff (holiday periods, sickness, ...) || Argo Messaging Service || At least one person from the core team (5 team members) is still available. || style="background: green" | Low ||  || 1 or more working days
|-
| 6 || Major disruption in the data centre (e.g. fire, flood, or electric failure) || Argo Messaging Service || The computing centre has an electric backup system and fire-control devices. If an incident occurs despite these controls, the virtual machines can be instantiated elsewhere. || style="background: green" | Low || Restore the service from the backups on new hosts not affected by the failure. || 1 or more working days
|-
| 7 || Major security incident. The system is compromised by external attackers and needs to be reinstalled and restored. || Argo Messaging Service || The backend database store is operated in clustered mode, supporting streaming replication and Point-in-Time Recovery for a period of at least six months. Daily backups are also executed; they are stored on a separate system and can be restored at once, losing up to 24 hours of data. || style="background: green" | Low || Restore the service from backups on new hosts. || 3-4 working hours; up to 1 day if new host certificates are required.
|-
| 8 || (D)DoS attack. The service is unavailable because of a coordinated DDoS. || Argo Messaging Service || The GRNET network provides protection against DoS attacks; a firewall can limit the impact of a DDoS. || style="background: green" | Low || Use of the DDoS protection at the network level provided by GRNET. || Depending on the attack, a few hours maximum
|}

Outcome

The level of all the identified risks is acceptable, and the countermeasures already adopted are considered satisfactory.

Availability and Continuity test

The proposed A/C test will focus on a recovery scenario: the service has been disrupted and needs to be reinstalled from scratch. The time spent restoring the service from the last backup of the data stored in it will be measured, and any eventual loss of information will be evaluated. Performing this test will be useful for spotting any issues in the recovery procedures of the service.
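A minimal sketch of how the restore time and the potential data-loss window could be measured during the test. The restore callable and function names below are hypothetical placeholders; the real reinstall and restore steps are site-specific:

```python
# Hypothetical timing harness for the recovery drill (the restore callable
# is a placeholder; real reinstall/restore steps are site-specific).
import time
from datetime import datetime, timedelta, timezone

def run_recovery_drill(restore_fn, last_backup_time: datetime):
    """Time a from-scratch restore and report the potential data-loss window."""
    start = time.monotonic()
    restore_fn()  # reinstall the service and restore the last backup
    elapsed = time.monotonic() - start
    data_loss_window = datetime.now(timezone.utc) - last_backup_time
    return elapsed, data_loss_window

# Example run with a dummy restore step and a backup taken 3 hours ago.
elapsed, loss = run_recovery_drill(
    lambda: time.sleep(0.1),
    datetime.now(timezone.utc) - timedelta(hours=3),
)
print(f"restore took {elapsed:.1f} s; up to {loss} of data at risk")
```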

to do

Revision History

{| class="wikitable"
! Version !! Authors !! Date !! Comments
|-
|  || Alessandro Paolini || 2019-02-25 || first draft
|}