Data Transfer Availability and Continuity Plan
Back to main page: Services Availability Continuity Plans
This page reports the Availability and Continuity Plan for the Data Transfer service. It is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.
|Activity||Last done||Next planned|
|Risk assessment||Nov 2021||Nov 2022|
|Av/Co plan||2022-01-20||Q1 2023|
Previous plans are collected here: https://documents.egi.eu/document/3851
Hardware HA Configuration
- CERN instance:
- FTS3 is deployed as a load-balanced alias across a number of machines (4 at time of writing).
- WebFTS is a single instance.
- UKRI instance:
- FTS3 service is provided as an HAProxy load-balanced alias across a pool of servers.
Availability requirements and performances
The following monthly performance targets were agreed in the OLA:
- Availability: 99% (CERN), 95% (UKRI)
- Reliability: 99% (CERN), 97% (UKRI)
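As an illustration, the monthly downtime budget implied by these targets can be derived directly from the availability figure. The helper below is a hypothetical sketch, not part of any EGI or FTS tooling:

```python
def allowed_downtime_hours(availability_pct: float, hours_in_month: float = 30 * 24) -> float:
    """Maximum downtime per month compatible with an availability target."""
    return (1 - availability_pct / 100) * hours_in_month

# 99% availability (CERN) leaves at most ~7.2 hours of downtime in a 30-day month;
# 95% (UKRI) leaves ~36 hours.
print(round(allowed_downtime_hours(99), 1))  # 7.2
print(round(allowed_downtime_hours(95), 1))  # 36.0
```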
Other availability requirements:
- Users require a personal certificate to use the service; it can be accessed via the command line, the WebFTS interface, or through Rucio when Rucio is also used as the data management service.
- Work on token integration is ongoing, so that EGI Check-in or IAM can be used to access the service in the future.
- The service is accessible through X.509 certificates and/or other authentication systems.
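For illustration, a transfer submitted through the FTS3 REST interface is described by a small JSON job document. The sketch below shows the general shape of such a document; the host names and file paths are placeholders, and the exact set of supported parameters should be checked against the FTS3 documentation:

```python
import json

# Hypothetical source/destination URLs; replace with real storage endpoints.
job = {
    "files": [
        {
            "sources": ["gsiftp://source.example.org/data/file.root"],
            "destinations": ["gsiftp://dest.example.org/data/file.root"],
        }
    ],
    "params": {
        "verify_checksum": True,  # compare checksums after the transfer completes
        "retry": 2,               # retry a failed transfer up to twice
    },
}

print(json.dumps(job, indent=2))
```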
The service availability is regularly tested by the Nagios probes ch.cern.FTS3-Service, ch.cern.FTS3-StalledTransfers, eu.egi.FTS3-CertValidity, and eu.egi.FTS3-IGTF: https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?servicegroup=SERVICE_eu.egi.datatransfer.fts&style=overview
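The kind of certificate-validity check performed by a probe such as eu.egi.FTS3-CertValidity can be approximated in a few lines of standard-library Python. The thresholds and status labels below are illustrative, not those of the actual probe:

```python
from datetime import datetime, timedelta

def cert_status(not_after: datetime, now: datetime, warn_days: int = 30) -> str:
    """Classify a host certificate by the time left before expiry, Nagios-style."""
    remaining = not_after - now
    if remaining <= timedelta(0):
        return "CRITICAL"  # certificate already expired
    if remaining <= timedelta(days=warn_days):
        return "WARNING"   # certificate expiring soon
    return "OK"

now = datetime(2022, 1, 20)
print(cert_status(datetime(2022, 6, 1), now))   # OK
print(cert_status(datetime(2022, 2, 1), now))   # WARNING
print(cert_status(datetime(2021, 12, 1), now))  # CRITICAL
```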
Over the past years, the performance figures have not highlighted any particular availability or continuity issues for the EGI Data Transfer that needed further investigation.
Risks assessment and management
For more details, please refer to the risk assessment spreadsheet; a summary of the assessment is reported here.
|Risk id||Risk description||Affected components||Established measures||Risk level||Expected duration of downtime / time for recovery||Comment|
|1||Service unavailable / loss of data due to hardware failure||All components||Low||the measures already in place are considered satisfactory and risk level is acceptable|
|2||Service unavailable / loss of data due to software failure||All components||Low||the measures already in place are considered satisfactory and risk level is acceptable|
|3||Service unavailable / loss of data due to human error||All components||Reduce likelihood: access to production systems is limited, and documentation regarding the system is stored internally. System management training is required for those running the system.
Reduce impact: keep a test instance that can be used as a backup for the production instance if necessary. Backups and snapshots of the current configuration should also be made before implementing any changes.
|Low||0.5 days||the measures already in place are considered satisfactory and risk level is acceptable|
|4||Service unavailable due to network failure (network outage with causes external to the site)||All components||Reduce the likelihood: load balancers are in a highly available pair on separate VMware hypervisors. The site has redundant links along geographically different paths.
Reduce the impact: load balancers are under configuration management, making some errors easier to roll back.
|Low||Duration of network outage||the measures already in place are considered satisfactory and risk level is acceptable|
|5||Unavailability of key technical and support staff (holiday periods, sickness, ...)||All components||Reduce the likelihood: try to avoid key members of staff being on leave at the same time for prolonged periods. Assign a cover person to handle tickets during short-term absences.||Low||Duration of staff absence||the measures already in place are considered satisfactory and risk level is acceptable|
|6||Major disruption in the data centre (fire, flood or electrical failure, for example)||All components||Reduce the likelihood: the data centre is managed by a team of experts.
Reduce the impact: follow current site and machine room procedures. Make the hardware secure (once it is safe to do so) and assess whether it is safe to restart. Notify the affected parties of the incident and the current assessment of its scale.
|Low||1 day or more||the measures already in place are considered satisfactory and risk level is acceptable|
|7||Major security incident. The system is compromised by external attackers and needs to be reinstalled and restored.||All components||Reduce the likelihood: the majority of the FTS components are not publicly accessible.
Reduce the impact: use of a configuration management system means that reinstalling, restoring or deploying a new instance of FTS from a known safe state can be done quickly.
|Low||1 day||the measures already in place are considered satisfactory and risk level is acceptable|
|8||(D)DOS attack. The service is unavailable because of a coordinated DDOS.||All components||Low||1 day or more||the measures already in place are considered satisfactory and risk level is acceptable|
The level of all the identified risks is acceptable, and the countermeasures already adopted are considered satisfactory.
- Procedures for the countermeasures to invoke in case a risk occurs are available internally to the provider.
- The general FTS documentation can also be used: https://fts3-docs.web.cern.ch/fts3-docs/index.html
- The Availability targets don't change in case the plan is invoked.
Recovery requirements (in general, these are the outcomes of the Business Impact Analysis, BIA):
- Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 4 days
- Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity (this has to be less than MTPoD)): 3 days
- Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): 3 days
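The constraint stated above, that the RTO has to be less than the MTPoD, can be captured by a simple consistency check. The function is a hypothetical helper for illustration, not part of any EGI tooling:

```python
def recovery_objectives_consistent(mtpod_days: float, rto_days: float, rpo_days: float) -> bool:
    """RTO must be positive and strictly shorter than the MTPoD; RPO must be positive."""
    return 0 < rto_days < mtpod_days and rpo_days > 0

# Values from this plan: MTPoD = 4 days, RTO = 3 days, RPO = 3 days.
print(recovery_objectives_consistent(4, 3, 3))  # True
```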
- Approach for the return to normal working conditions as reported in the risk assessment.
- The support unit Data Transfer shall be used to report any incident or service request
- The providers can contact EGI Operations via ticket or email in case the continuity plan is invoked, or to discuss any change to it.
Availability and Continuity test
Considering the Maximum Tolerable Period of Disruption (MTPoD), performing an availability and continuity test is not required: as described in the risk assessment, the countermeasures in place allow the service to be restored within a reasonable amount of time.
|Author||Date||Comment|
|Renato Santana||2021-10-28||first draft|
|Renato Santana, Alessandro Paolini||2021-12-01||Risk Analysis and other information filled in|
|Alessandro Paolini||2022-01-20||updated the performance and the additional information section; continuity test not needed; plan finalised|