
EGI Workload Manager Availability and Continuity Plan


Back to main page: Services Availability Continuity Plans

Introduction

This page reports the Availability and Continuity Plan for EGI Workload Manager - DIRAC4EGI. It is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk occurring or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.

                    | Last       | Next
Risks assessment    | 2019-01-10 | 2021 Apr
Av/Co plan and test | 2020-04-14 | 2021 Apr

Previous plans are collected here: https://documents.egi.eu/document/3597

Availability requirements and performances

The following performance targets, on a monthly basis, were agreed in the OLA:

  • Availability: 99%
  • Reliability: 99%

Other availability requirements:
  • the service is accessible through either an X509 certificate or an OAuth2 IdP (upcoming with the new release)
  • the service is accessible via CLI and web UI (link to the probes that check the authentication and the service in general)

The service availability is regularly tested by the Nagios probe org.nagiosexchange.Portal-WebCheck: https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?host=dirac.egi.eu
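
As an illustration of what such a web check does, the following is a minimal sketch of a Nagios-style availability probe in Python. It is not the actual org.nagiosexchange.Portal-WebCheck probe; the portal URL and the timeout are assumptions.

    #!/usr/bin/env python3
    # Minimal sketch of a web availability check in the Nagios plugin style.
    # The URL below is assumed to be the public DIRAC4EGI portal endpoint.
    import sys
    import urllib.request

    URL = "https://dirac.egi.eu/DIRAC/"   # assumed portal endpoint
    TIMEOUT = 30                          # seconds

    try:
        with urllib.request.urlopen(URL, timeout=TIMEOUT) as response:
            if response.status == 200:
                print(f"OK - portal answered with HTTP {response.status}")
                sys.exit(0)               # Nagios OK
            print(f"WARNING - unexpected HTTP status {response.status}")
            sys.exit(1)                   # Nagios WARNING
    except Exception as exc:
        print(f"CRITICAL - portal unreachable: {exc}")
        sys.exit(2)                       # Nagios CRITICAL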

The performance reports in terms of Availability and Reliability are produced by ARGO on an almost real-time basis, and they are also periodically collected in the Documentation Database.
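
For reference, the monthly figures follow the usual convention that availability is the fraction of known time the service was up, while reliability additionally excludes agreed scheduled downtimes. The sketch below illustrates those formulas with minute-level counts; it is an assumption of the general approach, not ARGO's exact algorithm.

    # Minimal sketch of the usual monthly availability/reliability formulas
    # (assumption: ARGO's exact computation uses detailed status timelines).
    def monthly_figures(up, down, scheduled_down, unknown, total):
        """All arguments are minutes in the month; total = up + down + scheduled_down + unknown."""
        known = total - unknown
        availability = 100.0 * up / known if known else None
        # reliability ignores time spent in agreed scheduled downtimes
        reliability_base = known - scheduled_down
        reliability = 100.0 * up / reliability_base if reliability_base else None
        return availability, reliability

    # Example: a 30-day month with 4 hours of unscheduled downtime
    a, r = monthly_figures(up=42960, down=240, scheduled_down=0, unknown=0, total=43200)
    print(f"Availability {a:.2f}%, Reliability {r:.2f}%")  # both ~99.44%, above the 99% target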

Over the past years, the performance figures have not highlighted any particular Av/Co issues for the Workload Manager that need further investigation.

Risks assessment and management

For more details, please look at the Google spreadsheet. A summary of the assessment is reported here.

Risks analysis

 to update 
Risk id | Risk description | Affected components | Established measures | Risk level | Expected duration of downtime / time for recovery | Comment
1 | Service unavailable / loss of data due to hardware failure | All the service components | Database protection with regular backups. Redundant Configuration database. Snapshots of the virtual machines hosting DIRAC4EGI services. | Medium | 1 working day after the hardware failure recovery | The measures already in place are considered satisfactory and the risk level is acceptable
2 | Service unavailable / loss of data due to software failure | All the service components | Backup of the MySQL and Configuration databases | Low | 1 working day | The measures already in place are considered satisfactory and the risk level is acceptable
3 | Service unavailable / loss of data due to human error | All the service components | Backup of the MySQL and Configuration databases | Low | 1 working day | The measures already in place are considered satisfactory and the risk level is acceptable
4 | Service unavailable due to network failure (network outage with causes external to the site) | All the service components | Geographically distributed redundant Configuration Service. Redundant failover Request Management Service. | Low | 1 hour after the network recovery | The measures already in place are considered satisfactory and the risk level is acceptable
5 | Unavailability of key technical and support staff (holiday periods, sickness, ...) | Resources management. User support. Security infrastructure components | Automated synchronization with the BDII, VOMS and GOCDB information indices. Automated resource monitoring service. | Low | 1 or more working days | The measures already in place are considered satisfactory and the risk level is acceptable
6 | Major disruption in the data centre, for example fire, flood or electrical failure | All the service components | Backup of the MySQL and Configuration databases | Low | Several weeks | The measures already in place are considered satisfactory and the risk level is acceptable
7 | Major security incident: the system is compromised by external attackers and needs to be reinstalled and restored | All the service components | Backup of the MySQL and Configuration databases | Low | 1 or more working days | The measures already in place are considered satisfactory and the risk level is acceptable
8 | (D)DoS attack: the service is unavailable because of a coordinated DDoS | All the service components | Limited service query queues avoiding dangerous overloading of the service components. Automatic service restart after going down due to an overload. | Low | 1 hour | The measures already in place are considered satisfactory and the risk level is acceptable
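
Several of the established measures above rely on regular backups of the MySQL and Configuration databases. The following is a minimal sketch of a dump-based backup of the MySQL databases; the backup directory, the credentials file and the mysqldump options are illustrative assumptions, not the provider's actual setup.

    #!/usr/bin/env python3
    # Minimal sketch of a nightly dump-based backup of the DIRAC MySQL databases.
    # Host, credentials file and backup directory are illustrative assumptions.
    import datetime
    import pathlib
    import subprocess

    BACKUP_DIR = pathlib.Path("/var/backups/dirac")     # hypothetical location
    DEFAULTS_FILE = "/root/.my_backup.cnf"               # hypothetical credentials file

    def dump_all_databases() -> pathlib.Path:
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        dump_file = BACKUP_DIR / f"dirac-all-{stamp}.sql"
        with dump_file.open("wb") as out:
            subprocess.run(
                ["mysqldump", f"--defaults-extra-file={DEFAULTS_FILE}",
                 "--all-databases", "--single-transaction"],
                stdout=out, check=True)
        return dump_file

    if __name__ == "__main__":
        print(f"Backup written to {dump_all_databases()}")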

Outcome

 to update 

The level of all the identified risks is acceptable, and the countermeasures already adopted are considered satisfactory.

Additional Information

 to update 
  • There are no special procedures to invoke in case a risk occurs, other than the general administrator guide and generic internal procedures.
  • The availability targets do not change in case the plan is invoked.
  • Recovery requirements:
    • Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 5 days
    • Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity; this must not exceed the MTPoD): 5 days
    • Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): n/a
  • The approach for the return to normal working conditions is as reported in the risk assessment.
  • The dedicated GGUS Support Unit will be used to report any incident or service request.
  • The providers can contact EGI Operations via ticket or email in case the continuity plan is invoked, or to discuss any changes to it.

Availability and Continuity test

 to update 

The proposed A/C test will focus on a recovery scenario: the service is supposed to have been disrupted and needs to be reinstalled from scratch. Typically this covers risks 1, 2, and 7. The last backup of the data will be used to restore the service, verifying how much information is lost, and the time spent will be measured.

Performing this test will be useful to spot any issue in the recovery procedures of the service.
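
A minimal sketch of how the recovery time and the potential data loss could be measured during such a test is shown below, assuming a dump file produced as in the backup sketch above; the paths and the mysql invocation are illustrative assumptions.

    #!/usr/bin/env python3
    # Minimal sketch: time a restore from the most recent dump and estimate the
    # data-loss window (time between the dump and the simulated disruption).
    import datetime
    import pathlib
    import subprocess
    import time

    BACKUP_DIR = pathlib.Path("/var/backups/dirac")      # hypothetical location
    DEFAULTS_FILE = "/root/.my_backup.cnf"                # hypothetical credentials file

    latest = max(BACKUP_DIR.glob("dirac-all-*.sql"), key=lambda p: p.stat().st_mtime)
    data_loss = datetime.datetime.now() - datetime.datetime.fromtimestamp(latest.stat().st_mtime)

    start = time.monotonic()
    with latest.open("rb") as dump:
        subprocess.run(["mysql", f"--defaults-extra-file={DEFAULTS_FILE}"],
                       stdin=dump, check=True)            # replay the dump
    elapsed = time.monotonic() - start

    print(f"Restored {latest.name} in {elapsed:.0f} s; "
          f"up to {data_loss} of recent data may be lost")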

Test details

More details available on https://documents.egi.eu/document/3597

Test case | Simulation | Recovery time | Actions | Status
Service/Agent crash that can be caused by some transaction failure, software error or human error | Kill the component process | A few seconds | The component is restarted automatically by the system monitoring facility | PASS
Host failure, for example due to a power cut | Reboot the dirac4.grid.cyfronet.pl server | A few minutes | The host rebooting sequence contains an automatic restart of all configured DIRAC components by using supervisord | PASS
Installed software corruption | Reinstall the DIRAC software stack from scratch | 10-15 minutes | Manual intervention: run the dirac-install installer tool and verify that all the components restart properly | PASS
Configuration files lost or corrupted, for example due to a hard disk failure | Backups of the local configuration files in a database or on another server | A few minutes | Replace the lost configuration with a backup copy | PASS
DB corruption and/or crash | Recover from a dump | 5-30 minutes | Manual intervention by the CYFRONET database service administrators | PASS
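
The host-failure test relies on supervisord restarting all configured DIRAC components at boot. A minimal sketch of verifying this via supervisord's XML-RPC interface is shown below; it assumes the default inet_http_server on localhost:9001, which may differ from the actual DIRAC4EGI configuration.

    #!/usr/bin/env python3
    # Minimal sketch: verify that supervisord restarted all managed components
    # after a reboot. Assumes supervisord's inet_http_server on localhost:9001.
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("http://localhost:9001/RPC2")
    not_running = [p["name"] for p in server.supervisor.getAllProcessInfo()
                   if p["statename"] != "RUNNING"]

    if not_running:
        print("Components not running:", ", ".join(not_running))
        raise SystemExit(1)
    print("All supervised DIRAC components are RUNNING")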

Test outcome

The test can be considered successful: the service can be restored in a short time if hardware, software, or database failures occur.

Revision History

Version | Authors | Date | Comments
        | Alessandro Paolini | 2019-01-10 | first draft, discussing with the provider
        | Alessandro Paolini | 2019-08-27 | adding other availability requirements and additional information for the risk assessment
        | Alessandro Paolini | 2019-11-25 | page updated with additional availability requirements and the additional information section; waiting for the recovery test, hopefully to be done in January
        | Alessandro Paolini | 2020-04-14 | added details about the recovery test provided by the supplier; plan finalised
        | Alessandro Paolini | 2021-05-11 | starting the yearly review: https://ggus.eu/index.php?mode=ticket_info&ticket_id=151951