Collaboration Tools Availability and Continuity Plan
Back to main page: Services Availability Continuity Plans
work in progress
Service Availability and Continuity plan structure
- Performance (A/R targets agreed in the OLA, comments on past behaviour and any particular issues)
- Summary of the risk assessment
- Availability/Continuity tests and results
This page reports on the Availability and Continuity Plan for the EGI Collaboration tools. It is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk occurring or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.
|Activity||Last performed||Next review|
|Risks assessment||2018-04-24||April 2019|
|Av/Co plan and test||in progress||--|
The performance reports in terms of Availability and Reliability are produced by ARGO in almost real time and are also collected every 6 months into the Documentation Database (last period: July 2017 - December 2017).
The following monthly performance targets were agreed in the OLA:
- Availability: DNS 99%; other services 95%
- Reliability 99%
Over the past years, the performance reports have not highlighted any particular availability or continuity issues with the Collaboration tools requiring further investigation.
Risks assessment and management
For more details, please refer to the Google spreadsheet; a summary of the assessment is reported below.
|Risk id||Risk description||Affected components||Established measures||Risk level||Expected duration of downtime / time for recovery||Comment|
|1||Service unavailable / loss of data due to hardware failure||all services||virtualization on HA platform, backups||Medium||1 or more working days||the measures already in place are considered satisfactory and the risk level is acceptable|
|2||Service unavailable / loss of data due to software failure||depends on the affected software, probably all services||monitoring of system health, backups||Medium||up to 8 hours (1 working day)||the measures already in place are considered satisfactory and the risk level is acceptable|
|3||Service unavailable / loss of data due to human error||depends on the affected software/data, probably all services||monitoring of system health, backups, actively maintained documentation (wiki)||Medium||up to 8 hours (1 working day)||the measures already in place are considered satisfactory and the risk level is acceptable|
|4||Service unavailable due to network failure (network outage with causes external to the site)||all services (could affect only selected users, depending on the problematic networks)||monitoring of service availability, alternative network routes||Low||up to 4 hours (half a working day)||the measures already in place are considered satisfactory and the risk level is acceptable|
|5||Unavailability of key technical and support staff (holiday periods, sickness, ...)||depends on the problem requiring staff attention, could escalate to all services||contacts to other local staff capable of administering the services||Medium||1 or more working days||the measures already in place are considered satisfactory and the risk level is acceptable|
|6||Major disruption in the data centre (e.g. fire, flood, or electrical failure)||Cluster web / Databases||the computing centre has a backup power system and fire-control devices||Medium||less than 1 hour||the measures already in place are considered satisfactory and the risk level is acceptable|
|7||Major security incident: the system is compromised by external attackers and needs to be reinstalled and restored||Cluster web / Databases / Lavoisier||the backend database is operated in clustered mode with backups; the code is stored in GitLab and in backup files; Lavoisier is stateless and does not store historical data, so a new Operations Portal can be started from backups on new machines||Medium||1 working day||the measures already in place are considered satisfactory and the risk level is acceptable|
|8||(D)DoS attack: the service is unavailable because of a coordinated DDoS||Cluster web / Databases||RENATER and the local network team provide protection against DoS attacks; the firewall can limit the impact of a DDoS||Medium||1 working day||the measures already in place are considered satisfactory and the risk level is acceptable|
Countermeasures to be improved
The risk concerning the unavailability of key technical and support staff needs additional treatment: the provider will try to avoid, where possible, staff members taking vacation at the same time. In addition, all staff can access the services remotely in case of need.
Availability and Continuity test
The proposed A/C test will focus on a recovery scenario: the service is disrupted and needs to be reinstalled from scratch. The time needed to restore the service from the most recent backup will be measured, assessing how much information would be lost (this is relevant in particular to the software-failure, human-error, and security-incident risks) and how many other operational tools would be affected.
Performing this test will help spot any issues in the recovery procedures of the service.
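As a rough illustration of what such a recovery test could measure, the sketch below restores a backup archive into a clean location and times the operation. This is only a minimal, self-contained example: all paths, file names, and the archive format are invented assumptions, not the provider's actual backup procedure, which in reality would involve database dumps and service configuration.

```shell
#!/bin/sh
# Illustrative sketch of a timed restore test (hypothetical paths and names).
set -e

WORK=$(mktemp -d)                 # throwaway area so the sketch is self-contained
BACKUP_DIR="$WORK/backups"
RESTORE_DIR="$WORK/restore"
mkdir -p "$BACKUP_DIR" "$RESTORE_DIR"

# Stand-in for the last service backup (hypothetical timestamped archive).
echo "sample data" > "$WORK/data.txt"
tar -czf "$BACKUP_DIR/collab-tools-20180425.tar.gz" -C "$WORK" data.txt

start=$(date +%s)

# Pick the most recent archive (assumes sortable timestamped names) and restore it.
latest=$(ls -1 "$BACKUP_DIR"/*.tar.gz | sort | tail -n 1)
tar -xzf "$latest" -C "$RESTORE_DIR"

end=$(date +%s)

# Verify the restored data and report the measured recovery time.
test -f "$RESTORE_DIR/data.txt" && echo "restore verified"
echo "restored $(basename "$latest") in $((end - start)) seconds"

rm -rf "$WORK"
```

In a real test, the measured time would be compared against the recovery durations declared in the risk table above.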
|Alessandro Paolini||2018-04-25||first draft, discussing with the provider|