Collaboration Tools Availability and Continuity Plan
This page reports the Availability and Continuity Plan for the EGI Collaboration Tools. It is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk occurring or its impact, a new treatment is agreed with the service provider to improve the availability and continuity of the service. The process concludes with an availability and continuity test.
|Activity||Last completed||Next planned|
|Risks assessment||2019-12-02||2020 December|
|Av/Co plan and test||2018-10-26||--|
Previous plans are collected here: https://documents.egi.eu/secure/ShowDocument?docid=3541
The following performance targets, evaluated on a monthly basis, were agreed in the OLA:
- Availability: DNS 99%; other services 95%
- Reliability 99%
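The monthly figures behind these targets follow the usual availability/reliability formulas, where reliability excludes scheduled downtime from the denominator. A minimal sketch; the downtime figures are illustrative, not real service measurements:

```python
# Sketch of the monthly availability/reliability computation.
# All downtime figures below are illustrative examples.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43200 minutes in a 30-day month

def availability(unscheduled_down: float, scheduled_down: float) -> float:
    """Fraction of the whole month the service was up."""
    return (MINUTES_PER_MONTH - unscheduled_down - scheduled_down) / MINUTES_PER_MONTH

def reliability(unscheduled_down: float, scheduled_down: float) -> float:
    """Uptime fraction computed over the time the service was meant to be up,
    i.e. scheduled maintenance is excluded from the denominator."""
    return (MINUTES_PER_MONTH - unscheduled_down - scheduled_down) / (
        MINUTES_PER_MONTH - scheduled_down
    )

# Example month: 120 min scheduled maintenance, 300 min unscheduled outage.
a = availability(300, 120)
r = reliability(300, 120)
print(f"availability = {a:.4f}, reliability = {r:.4f}")
print("meets 95% availability target:", a >= 0.95)
print("meets 99% reliability target:", r >= 0.99)
```

With these example figures, availability is about 99.0% and reliability about 99.3%, so both the 95% availability and 99% reliability targets above would be met.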
Other availability requirements:
- the service is accessible via an X.509 certificate, via username/password, or via EGI Check-in
- the service is accessible via a web UI
The service availability is regularly tested by the Nagios probes org.nagiosexchange.Portal-WebCheck and org.nagiosexchange.RT-WebCheck:
- https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?servicegroup=SITE_GRIDOPS-CTOOLS_egi.Portal&style=overview
- https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?servicegroup=SITE_GRIDOPS-CTOOLS_eu.egi.rt&style=overview
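A web check of this kind can be reproduced locally, for example to verify an endpoint while a probe is unavailable. A minimal sketch using Python's standard library; the URL in the example is a placeholder, and the ARGO/Nagios probes listed above remain the authoritative availability checks:

```python
# Minimal local stand-in for a web availability check: fetch a URL and
# report whether it answers with an HTTP success code within a timeout.
# The URL in the demo below is a placeholder, not a real service endpoint.
from urllib.request import urlopen

def web_check(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with an HTTP 2xx/3xx status in time."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # covers URLError, connection errors and timeouts
        return False

if __name__ == "__main__":
    url = "https://example.org/"  # placeholder endpoint
    print(url, "UP" if web_check(url) else "DOWN")
```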
Some issues affecting the Collaboration Tools have been reported and investigated, and a plan to improve the service quality is in progress, overseen by EGI Operations.
Risks assessment and management
For more details, please refer to the Google spreadsheet; a summary of the assessment is reported here.
|Risk id||Risk description||Affected components||Established measures||Risk level||Expected duration of downtime / time for recovery||Comment|
|1||Service unavailable / loss of data due to hardware failure||all services||virtualization on HA platform, backups||Medium||1 or more working days||the measures already in place are considered satisfactory and risk level is acceptable|
|2||Service unavailable / loss of data due to issues with the database server (aldor3)||DocDB and Confluence; the remaining services use their own databases||virtualization on HA platform, backups, removing the dependency where not necessary||High||up to 8 hours (1 working day)||remove the dependency for several critical services (the local DB is fast enough on the upgraded hardware), move the DB VM to VMware, with an HA copy on the second VMware site|
|3||Service unavailable / loss of data due to software failure||all services depend on the quality of open-source software, but there is no global dependency on a single software component (besides the Linux kernel and distribution)||monitoring of system health, backups||Medium||up to 8 hours (1 working day)||the measures already in place are considered satisfactory and risk level is acceptable|
|4||Service unavailable / loss of data due to human error||depends on the affected software/data, probably all services||monitoring of system health, backups, actively maintained documentation (wiki), subscription to support forums and chat via IRC (for the Indico software)||Medium||up to 8 hours (1 working day)||the measures already in place are considered satisfactory and risk level is acceptable|
|5||Service unavailable due to network failure (network outage with causes external to the site)||all services (could affect only selected users, depending on the problematic networks)||monitoring of service availability, alternative network routes||Medium||up to 4 hours (half working day)||the measures already in place are considered satisfactory and risk level is acceptable|
|6||Not enough people for maintaining and operating the service||depends on the problem requiring staff attention, could escalate to all services||contacts to other local staff capable of administering the services||Medium||1 or more working days||the measures already in place are considered satisfactory and risk level is acceptable|
|7||Major disruption in the data centre||all services||access to other data centres, geographically diverse backups||Medium||1 or more working days||the measures already in place are considered satisfactory and risk level is acceptable|
|8||Major security incident. The system is compromised by external attackers and needs to be reinstalled and restored.||all services||improved monitoring of installed versions, monitoring of network traffic by NREN||Medium||up to 8 hours (1 working day)||the measures already in place are considered satisfactory and risk level is acceptable|
|9||(D)DoS attack. The service is unavailable because of a coordinated DDoS.||all services||leveraging of HA platform features, support of network provider||Medium||up to 4 hours (half working day)||the measures already in place are considered satisfactory and risk level is acceptable|
- procedures for invoking the several countermeasures in case a risk occurs are available to the provider
- the Availability targets don't change in case the plan is invoked.
- recovery requirements:
- Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 1 day
- Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity (this has to be less than MTPoD)): 1 day
- Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): 1 day
- the approach for the return to normal working conditions is as reported in the risk assessment.
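The constraint that the RTO must not exceed the MTPoD can be stated explicitly. A trivial sketch using the values above expressed in hours (the names are illustrative, not part of any EGI tooling):

```python
# Consistency check on the recovery requirements above (values in hours).
# Illustrative only: these names are not part of any EGI tooling.

MTPOD_H = 24  # maximum tolerable period of disruption: 1 day
RTO_H = 24    # recovery time objective: 1 day
RPO_H = 24    # recovery point objective: 1 day

def rto_within_mtpod(rto_h: float, mtpod_h: float) -> bool:
    """The RTO must not exceed the MTPoD."""
    return rto_h <= mtpod_h

print("RTO within MTPoD:", rto_within_mtpod(RTO_H, MTPOD_H))
```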
There are plans to reduce the level of risk 2 by removing the dependency of Confluence and the Document Database on the aldor3 DB; moreover, a process is ongoing to separate the several tools onto different machines in order to improve the availability and continuity of the whole service. EGI Operations is following the service status closely through monthly meetings with the CESNET team.
Availability and Continuity test
The proposed A/C test will focus on a recovery scenario: the service has been disrupted and needs to be reinstalled from scratch. The time needed to restore the service from the last data backup will be measured. Performing this test will be useful to spot any issues in the recovery procedures of the service.
- The recovery process has been tested and took 26 minutes.
- Backups are created every two days, so in the worst case up to two days of data can be lost.
Outcomes and recommendations: the test can on the whole be considered successful: the recovery time is acceptable, even though we need to evaluate whether losing 2 days of data in the worst case is tolerable. For some services included in the Collaboration Tools a higher backup frequency might be needed: we are going to perform a Business Impact Analysis for these services and then agree with the providers on the necessary updates to the plan.
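The worst-case data loss follows directly from the backup interval: data written just after a backup is lost if the service fails just before the next one. A sketch comparing the 2-day backup interval with the 1-day RPO stated above (function names are illustrative):

```python
# Worst-case data loss equals the backup interval: data written right after
# a backup is lost if the service fails right before the next one.
# Figures from the plan: backups every 2 days, RPO of 1 day.

BACKUP_INTERVAL_H = 48  # backups every two days
RPO_H = 24              # recovery point objective: 1 day

def worst_case_loss(backup_interval_h: float) -> float:
    """Maximum age of unrecoverable data, in hours."""
    return backup_interval_h

def meets_rpo(backup_interval_h: float, rpo_h: float) -> bool:
    """True if the worst-case data loss stays within the RPO."""
    return worst_case_loss(backup_interval_h) <= rpo_h

print("worst-case loss:", worst_case_loss(BACKUP_INTERVAL_H), "hours")
print("meets RPO:", meets_rpo(BACKUP_INTERVAL_H, RPO_H))
# Moving to daily backups would bring the worst case within the 1-day RPO:
print("daily backups meet RPO:", meets_rpo(24, RPO_H))
```

This makes the gap noted above explicit: a 2-day backup interval cannot satisfy a 1-day RPO, which is exactly what the Business Impact Analysis is meant to evaluate per service.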
|Alessandro Paolini||2018-04-25||first draft, discussing with the provider|
|Alessandro Paolini||2018-10-26||recovery test performed, plan finalised|
|Alessandro Paolini||2019-11-25||starting the yearly review|