Revision as of 18:01, 26 February 2019
Back to main page: Services Availability Continuity Plans
Introduction
This page reports on the Availability and Continuity Plan for the ARGO Messaging Service. It is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing the likelihood of a risk, or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.
| Last | Next |
---|---|---|
Risk assessment | 2019-02-25 | - |
Av/Co plan and test | - | - |
Performances
The performance reports, in terms of Availability and Reliability, are produced by ARGO on a near real-time basis and are also periodically collected into the Documentation Database. The following monthly performance targets were agreed in the OLA:
- Availability: 98%
- Reliability: 98%
Risks assessment and management
For more details, please see the Google spreadsheet. A summary of the assessment is reported here.
Risks analysis
Risk id | Risk description | Affected components | Established measures | Risk level | Treatment | Expected duration of downtime / time for recovery |
---|---|---|---|---|---|---|
1 | Service unavailable / loss of data due to hardware failure | Argo Messaging Service | Automated deployment and backup procedures that allow for rapid redeployment of affected components | Low | Redirect DNS to point to one of the 3 instances running behind the HA/LB service while we reinstall/migrate the service on a new VM and update the DNS | Up to 8 hours |
2 | Service unavailable / loss of data due to software failure | Argo Messaging Service | Automated deployment and backup procedures that allow for rapid redeployment of affected components | Low | Redirect DNS to point to one of the 3 instances running behind the HA/LB service while we reinstall/migrate the service on a new VM and update the DNS | Up to 8 hours |
3 | Service unavailable / loss of data due to human error | Argo Messaging Service | Automated deployment and backup procedures that allow for rapid redeployment of affected components | Low | Redirect DNS to point to one of the 3 instances running behind the HA/LB service while we reinstall/migrate the service on a new VM and update the DNS | Up to 8 hours |
4 | Service unavailable due to network failure (network outage with causes external to the site) | Argo Messaging Service | Automated deployment and backup procedures that allow for rapid redeployment of affected components | Low | Redirect DNS to point to one of the 3 instances running behind the HA/LB service while we reinstall/migrate the service on a new VM and update the DNS | Up to 8 hours |
5 | Unavailability of key technical and support staff (holiday periods, sickness, ...) | Argo Messaging Service | There is always one member of the team on call. | Low | Redirect DNS to point to one of the 3 instances running behind the HA/LB service while we reinstall/migrate the service on a new VM and update the DNS | Up to 8 hours |
6 | Major disruption in the data centre (for example fire, flood, or electrical failure) | Argo Messaging Service | Automated deployment and backup procedures that allow for rapid redeployment of affected components | Medium | Redirect DNS to point to one of the 3 instances running behind the HA/LB service while we reinstall/migrate the service on a new VM and update the DNS | 1 or more working days |
7 | Major security incident. The system is compromised by external attackers and needs to be reinstalled and restored. | Argo Messaging Service | Automated deployment and backup procedures that allow for rapid redeployment of affected components | Low | Redirect DNS to point to one of the 3 instances running behind the HA/LB service while we reinstall/migrate the service on a new VM and update the DNS | 1 or more working days |
8 | (D)DoS attack. The service is unavailable because of a coordinated DDoS. | Argo Messaging Service | Automated deployment and backup procedures that allow for rapid redeployment of affected components | Low | Redirect DNS to point to one of the 3 instances running behind the HA/LB service while we reinstall/migrate the service on a new VM and update the DNS | Up to 8 hours |
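The DNS-redirect treatment listed above can be sketched as a small selection routine: probe the remaining instances behind the HA/LB service and pick a healthy one as the new DNS target. This is a minimal illustration only; the hostnames and the health probe are hypothetical and not part of the actual AMS tooling.

```python
# Hypothetical sketch of the "redirect DNS to a healthy instance" treatment.
# Hostnames and the probe are placeholders, not real AMS endpoints.

def pick_healthy_instance(instances, is_healthy):
    """Return the first instance passing the health probe, or None."""
    for host in instances:
        if is_healthy(host):
            return host
    return None

def failover_target(instances, is_healthy, current):
    """Keep the current DNS target if healthy; otherwise pick a replacement."""
    if is_healthy(current):
        return current  # no DNS change needed
    candidates = [h for h in instances if h != current]
    return pick_healthy_instance(candidates, is_healthy)

if __name__ == "__main__":
    instances = ["ams1.example.org", "ams2.example.org", "ams3.example.org"]
    down = {"ams2.example.org"}            # simulate the failed VM
    probe = lambda host: host not in down  # stand-in for a real HTTP health check
    print(failover_target(instances, probe, "ams2.example.org"))
    # -> ams1.example.org
```

In production the probe would be an HTTP request against each instance and the returned host would be written into the DNS record; the point here is only that failover reduces to choosing any healthy peer among the 3 instances.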
Outcome
The level of all the identified risks is acceptable, and the countermeasures already adopted are considered satisfactory.
Availability and Continuity test
The proposed A/C test will focus on a recovery scenario: the service has been disrupted and needs to be reinstalled from scratch. The time spent restoring the service will be measured, using the last backup of the data stored in it and evaluating any eventual loss of information. Performing this test will be useful to spot any issues in the recovery procedures of the service.
- Start of recovery scenario test:
Tue Feb 19 09:09:02 CET 2019: One of the 3 VMs running the AMS service instance has been disrupted and needs to be reinstalled from scratch.
- End of test recovery scenario:
Tue Feb 19 10:10:01 CET 2019: We assumed a scenario where the second AMS node faced a hardware failure and a new VM had to be spawned in order to restore it and be able to provide the AMS in full high availability/load balancing mode. The process of spawning the new VM started at 09:25 CET (we allocated approx. 15 minutes for simulating the analysis since the detection of the VM failure at 09:09 CET).
It should be noted that during the ~1 hour that the AMS node was down the existing user sessions were not affected, and new user sessions could be established without any noticeable delay. The test showed that in case of failure of one of the AMS nodes, there is no impact on the users.
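The measured downtime follows directly from the two logged timestamps; a quick check (the `CET` suffix is dropped because `strptime` does not parse timezone names portably):

```python
from datetime import datetime

# Timestamps taken from the test log above, without the "CET" suffix.
FMT = "%a %b %d %H:%M:%S %Y"
start = datetime.strptime("Tue Feb 19 09:09:02 2019", FMT)
end = datetime.strptime("Tue Feb 19 10:10:01 2019", FMT)

downtime = end - start
print(downtime)  # 1:00:59, i.e. roughly one hour
```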
Revision History
Version | Authors | Date | Comments |
---|---|---|---|
 | Alessandro Paolini | 2019-02-25 | first draft |