The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.


Latest revision as of 15:58, 28 January 2022



This page is Deprecated; the content has been moved to https://confluence.egi.eu/display/SUPWORKMAN/EGI+Workload+Manager+Availability+and+Continuity+plan

Back to main page: Services Availability Continuity Plans

Introduction

This page reports on the Availability and Continuity Plan for the EGI Workload Manager (DIRAC4EGI) and is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk occurring or its impact, a new treatment is agreed with the service provider to improve the availability and continuity of the service. The process is concluded with an availability and continuity test.

                     Last        Next
Risks assessment     2021-05-28  2022 June
Av/Co plan and test  2021-05-31  2022 June

Previous plans are collected here: https://documents.egi.eu/document/3597

Availability requirements and performances

The following performance targets, on a monthly basis, were agreed in the OLA:

  • Availability: 99%
  • Reliability: 99%
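For illustration, monthly Availability and Reliability figures of this kind can be computed from monitored minutes roughly as follows (a simplified sketch of the usual A/R definitions, in which "unknown" time is excluded from both figures and scheduled downtime is additionally excluded from Reliability; the real ARGO computation is more detailed):

```python
def monthly_performance(minutes_up, minutes_scheduled_downtime,
                        minutes_unknown, minutes_total):
    """Compute Availability and Reliability percentages for one month.

    Simplified definitions: minutes in the 'unknown' state are excluded
    from both figures, and scheduled downtime is additionally excluded
    from Reliability, so Reliability >= Availability.
    """
    known = minutes_total - minutes_unknown
    availability = 100.0 * minutes_up / known
    reliability = 100.0 * minutes_up / (known - minutes_scheduled_downtime)
    return availability, reliability

# Example: a 30-day month (43200 min) with 400 min of unscheduled outage
# and 100 min of scheduled downtime.
av, rel = monthly_performance(minutes_up=42700, minutes_scheduled_downtime=100,
                              minutes_unknown=0, minutes_total=43200)
# av is about 98.8%, rel is about 99.1%: Availability would miss the 99%
# monthly target while Reliability would meet it.
```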

Other availability requirements:

  • the service is accessible through either an X.509 certificate or an OAuth2 IdP (upcoming with the new release);
  • the service is accessible via CLI and web UI (link to the probes that check the authentication and the service in general).

The service availability is regularly tested by the Nagios probe org.nagiosexchange.Portal-WebCheck: https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?host=dirac.egi.eu
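Conceptually, such a portal web check reduces to an HTTP GET with a timeout, mapped onto the usual Nagios states. The sketch below is a minimal illustration of that idea, not the actual Portal-WebCheck code:

```python
import urllib.request
import urllib.error

# Standard Nagios exit codes
OK, WARNING, CRITICAL = 0, 1, 2

def classify(status_code):
    """Map an HTTP status code to a Nagios state (simplified convention)."""
    if status_code < 400:
        return OK
    if status_code < 500:
        return WARNING
    return CRITICAL

def check_portal(url, timeout=10):
    """Fetch the portal page and return a Nagios state."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as exc:
        return classify(exc.code)
    except Exception:
        # DNS failure, connection refused, timeout, ...
        return CRITICAL
```

A real probe would also print a status line and measure the response time; this sketch only returns the exit state.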

The Availability and Reliability performance reports are produced by ARGO in near real time, and they are also periodically collected into the Documentation Database.

Over the past years the performance figures have not highlighted any particular Av/Co issues for the Workload Manager that would need further investigation.

Risks assessment and management

For more details, please look at the Google spreadsheet; a summary of the assessment is reported here.

Risks analysis

Risk 1: Service unavailable / loss of data due to hardware failure
  Affected components: all the service components
  Established measures: database protection with regular backups; redundant Configuration database; snapshots of the virtual machines hosting the DIRAC4EGI services
  Risk level: Medium
  Expected downtime / time for recovery: 1 working day after the hardware failure recovery

Risk 2: Service unavailable / loss of data due to software failure
  Affected components: all the service components
  Established measures: backup of the MySQL and Configuration databases
  Risk level: Medium
  Expected downtime / time for recovery: 1 working day

Risk 3: Service unavailable / loss of data due to human error
  Affected components: all the service components
  Established measures: backup of the MySQL and Configuration databases
  Risk level: Medium
  Expected downtime / time for recovery: 1 working day

Risk 4: Service unavailable due to a network failure (network outage with causes external to the site)
  Affected components: all the service components
  Established measures: geographically distributed redundant Configuration Service; redundant failover Request Management Service
  Risk level: Low
  Expected downtime / time for recovery: 1 hour after the network recovery

Risk 5: Unavailability of key technical and support staff (holiday periods, sickness, ...)
  Affected components: resources management; user support; security infrastructure components
  Established measures: automated synchronisation with the BDII, VOMS and GOCDB information indices; automated resource monitoring service
  Risk level: Low
  Expected downtime / time for recovery: 1 or more working days

Risk 6: Major disruption in the data centre (fire, flood or electric failure, for example)
  Affected components: all the service components
  Established measures: backup of the MySQL and Configuration databases
  Risk level: Medium
  Expected downtime / time for recovery: several weeks

Risk 7: Major security incident: the system is compromised by external attackers and needs to be reinstalled and restored
  Affected components: all the service components
  Established measures: backup of the MySQL and Configuration databases
  Risk level: Low
  Expected downtime / time for recovery: 1 or more working days

Risk 8: (D)DoS attack: the service is unavailable because of a coordinated DDoS
  Affected components: all the service components
  Established measures: limited service query queues, avoiding dangerous overloading of the service components; automatic service restart after going down due to an overload
  Risk level: Low
  Expected downtime / time for recovery: 1 hour

For every risk the comment is the same: the measures already in place are considered satisfactory and the risk level is acceptable.

Outcome

The level of all the identified risks is acceptable, and the countermeasures already adopted are considered satisfactory.

Additional Information

  • There are no special procedures to invoke in case a risk occurs, beyond the general administrator guide (https://dirac.readthedocs.io/en/latest/AdministratorGuide/index.html) and the generic internal procedures.
  • The Availability targets do not change in case the plan is invoked.
  • Recovery requirements:
    • Maximum tolerable period of disruption (MTPoD), the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations before its stakeholders perceive unacceptable consequences: 5 days
    • Recovery time objective (RTO), the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity (this has to be within the MTPoD): 5 days
    • Recovery point objective (RPO), the acceptable latency of data that will not be recovered: n/a
  • The approach for the return to normal working conditions is as reported in the risk assessment.
  • The dedicated GGUS Support Unit will be used to report any incident or service request.
  • The providers can contact EGI Operations via ticket or email in case the continuity plan is invoked, or to discuss any change to it.
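As a sanity check on these requirements, an outage can be compared against the MTPoD and RTO figures. The small sketch below uses the 5-day values stated in this plan; the dates in the example are invented for illustration:

```python
from datetime import datetime, timedelta

# Recovery requirements from this plan
MTPOD = timedelta(days=5)  # maximum tolerable period of disruption
RTO = timedelta(days=5)    # recovery time objective (within the MTPoD)

def outage_report(disruption_start, service_restored):
    """Return the outage duration and whether the recovery objectives were met."""
    outage = service_restored - disruption_start
    return {
        "outage": outage,
        "meets_rto": outage <= RTO,
        "within_mtpod": outage <= MTPOD,
    }

# Example: a disruption recovered after 2 days meets both objectives.
report = outage_report(datetime(2022, 6, 1, 9, 0), datetime(2022, 6, 3, 9, 0))
```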

Availability and Continuity test

The proposed A/C test will focus on a recovery scenario: the service is supposed to have been disrupted and needs to be reinstalled from scratch. Typically this covers risks 1, 2 and 7. The last backup of the data will be used to restore the service, verifying how much information is lost, and the time spent will be measured.

Performing this test will be useful to spot any issue in the recovery procedures of the service.

Test details

More details are available at https://documents.egi.eu/document/3597. The recovery test was performed in April 2020 but is still considered valid: there is no need to repeat it.

Test case: Service/Agent crash, caused by a transaction failure, a software error or a human error
  Simulation: kill the component process
  Recovery time: a few seconds
  Actions: the component is restarted automatically by the system monitoring facility
  Status: PASS

Test case: Host failure, for example due to a power cut
  Simulation: reboot the dirac4.grid.cyfronet.pl server
  Recovery time: a few minutes
  Actions: the host rebooting sequence contains an automatic restart of all configured DIRAC components by using supervisord
  Status: PASS

Test case: Installed software corruption
  Simulation: reinstall the DIRAC software stack from scratch
  Recovery time: 10-15 minutes
  Actions: manual intervention: run the dirac-install installer tool and verify that all the components restart properly
  Status: PASS

Test case: Configuration files loss or corruption, for example due to a hard disk failure
  Simulation: restore the local configuration files from backups kept in a database or on another server
  Recovery time: a few minutes
  Actions: replace the lost configuration with a backup copy
  Status: PASS

Test case: DB corruption and/or crash
  Simulation: recover from a dump
  Recovery time: 5-30 minutes
  Actions: manual intervention by the IN2P3-CC database service administrators
  Status: PASS

Test outcome

The test can be considered successful: the service can be restored in a short time if hardware, software or database failures occur.

Revision History

Authors             Date                     Comments
Alessandro Paolini  2019-01-10               first draft, discussing with the provider
Alessandro Paolini  2019-08-27               added other availability requirements and additional information for the risk assessment
Alessandro Paolini  2019-11-25               page updated with additional availability requirements and an Additional Information section; waiting for the recovery test, hopefully to be done in January
Alessandro Paolini  2020-04-14               added details about the recovery test provided by the supplier; plan finalised
Alessandro Paolini  2021-05-11, 2021-05-31   started the yearly review (https://ggus.eu/index.php?mode=ticket_info&ticket_id=151951); minor changes, review completed