
Configuration Database GOCDB Availability and Continuity Plan

Back to main page: [[Services Availability Continuity Plans]]
= Introduction =

This page reports on the Availability and Continuity Plan for the '''[[GOCDB]]''' and is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk occurring or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.


{| class="wikitable"
!
! Last
! Next
|-
! scope="row"| Risks assessment
| 2021-04-19
| 2022 April
|-
! scope="row"| Av/Co plan
| 2021-04-19
| 2022 April
|}


Previous plans are collected here: https://documents.egi.eu/secure/ShowDocument?docid=3537

= Availability requirements and performance =
The following performance targets were agreed in the OLA, on a monthly basis (a rough downtime budget is sketched below the list):
*Availability: 99%
*Reliability: 99%
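
As a rough illustration (not part of the OLA text) of what these targets imply, a 99% monthly target leaves a downtime budget of roughly 7.2 hours in a 30-day month. A minimal sketch of the arithmetic, assuming a 30-day month and ignoring the scheduled-downtime and UNKNOWN-state corrections that ARGO applies:

<syntaxhighlight lang="python">
# Rough downtime budget implied by a monthly availability target.
# Illustrative only: ARGO's real A/R computation also handles scheduled
# downtimes and UNKNOWN monitoring states, which are ignored here.

HOURS_PER_MONTH = 30 * 24  # assume a 30-day month


def downtime_budget(target: float, hours: float = HOURS_PER_MONTH) -> float:
    """Maximum downtime (in hours) compatible with an availability target."""
    return hours * (1.0 - target)


print(f"99% target -> {downtime_budget(0.99):.1f} h/month")  # 7.2 h/month
</syntaxhighlight>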


* The Accounting Portal service is available in a dedicated virtual machine running in the CESGA cloud framework based on OpenNebula software, which offers high availability thanks to its resources:
Other availability requirements:
** A pool of physical servers where the virtual machine can run. Over 50 servers with 24 cores and 32GB per server are available. These servers are configured with redundant power supply and two disks in RAID-1 configuration.
*the service is accessible through X509 certificate and EGI Check-in  
** Storage is provided in a NetApp HA storage solution, providing redundant configuration for data movers (servers) and RAID-TEC (triple parity) protection for the disks; the backup of this storage is performed on a daily basis
*the service is accessible via webUI and API
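
The plan itself does not prescribe any client tooling; purely as an illustration of API access with an X.509 credential, the sketch below queries the GOCDB Programmatic Interface (PI) with the widely used Python <code>requests</code> library. The PI entry point and the <code>get_site_list</code> method are given as examples and should be checked against the GOCDB PI documentation.

<syntaxhighlight lang="python">
# Illustrative only: query the GOCDB PI with an X.509 client certificate.
# Check the GOCDB PI documentation for the exact entry points and methods.
import requests

PI_URL = "https://goc.egi.eu/gocdbpi/private/"            # authenticated PI entry point (example)
CERT = ("/path/to/usercert.pem", "/path/to/userkey.pem")  # your X.509 certificate and key

response = requests.get(
    PI_URL,
    params={"method": "get_site_list"},  # example PI method
    cert=CERT,
    timeout=30,
)
response.raise_for_status()
print(response.text[:500])  # the PI answers with XML
</syntaxhighlight>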


The service availability is regularly tested by Nagios probes (eu.egi.CertValidity, org.nagios.GOCDB-PortCheck, org.nagiosexchange.GOCDB-PI, org.nagiosexchange.GOCDB-WebCheck): https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?servicegroup=SERVICE_egi.GOCDB&style=detail
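
The probes above are maintained in the EGI ARGO/Nagios infrastructure. Purely as an unofficial illustration of the kind of checks they perform (host certificate validity and web reachability), a minimal sketch using only the Python standard library follows; the portal URL is an example and this is not the implementation of the listed probes.

<syntaxhighlight lang="python">
# Unofficial sketch of the kind of checks the listed probes perform:
# host certificate expiry and basic HTTPS reachability. Not the actual
# eu.egi.CertValidity / org.nagiosexchange.* probe implementations.
import socket
import ssl
import time
import urllib.request

HOST = "goc.egi.eu"
URL = "https://goc.egi.eu/portal/"  # example endpoint to check


def cert_days_left(host: str, port: int = 443) -> float:
    """Days until the server certificate presented by host:port expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400


def is_up(url: str) -> bool:
    """True if the URL answers an HTTPS request with status 200."""
    try:
        with urllib.request.urlopen(url, timeout=15) as response:
            return response.status == 200
    except Exception:
        return False


print(f"{HOST}: certificate expires in {cert_days_left(HOST):.0f} days")
print(f"{URL}: {'OK' if is_up(URL) else 'FAILED'}")
</syntaxhighlight>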


The performance reports in terms of Availability and Reliability are produced by [https://argo.egi.eu/egi/OPS-MONITOR-Critical ARGO] on a near real-time basis, and they are also periodically collected into the [https://documents.egi.eu/public/ShowDocument?docid=2324 Documentation Database].

Over the past years, the performance reports have not highlighted any particular Av/Co issues with GOCDB that needed further investigation.


= Risks assessment and management =

For more details, please look at the [https://docs.google.com/spreadsheets/d/1eww6AtARDnNEWerZgG0rjoAOwSpRLDQU-Ps02sr_pPs/edit#gid=698217933 Google spreadsheet]. A summary of the assessment is reported here.

== Risks analysis ==
{| class="wikitable"
! Risk id
! Risk description
! Affected components
! Established measures
! Risk level
! Treatment
! Expected duration of downtime / time for recovery
|-
| 1
| Service unavailable / loss of data due to hardware failure
| Web portal and PI
| Reduce the likelihood: Webservers are in a highly available pair on separate VMWare hypervisors.
Reduce the impact: GOCDB has a separate failover instance. In case of disruption to the primary webservers, the failover is automatically engaged by the load balancers. In the case of prolonged disruption, the failover would be engaged by editing the DNS entry for goc.egi.eu.
| style="background: green"| Low
| In the case of a prolonged outage of the production hardware (+2 days), the failover would be set to read write mode, allowing the data to be updated.
| In the event of production hardware failure, the service would be restored incrementally:
*0.5 working days: a broadcast would be made providing the URL to the read only failover
*1 working day: the DNS entry for goc.egi.eu would be changed to point at the read only failover
*2 working days: the failover would be set to read write mode
|-
| 2
| Service unavailable / loss of data due to software failure
| Web portal and PI
| Reduce the likelihood: Webservers are in a highly available pair on separate VMWare hypervisors.
Reduce the impact: GOCDB has a separate failover instance. In case of disruption to the primary webservers, the failover is automatically engaged by the load balancers. In the case of prolonged disruption, the failover would be engaged by editing the DNS entry for goc.egi.eu.
| style="background: green"| Low
| In the case of a prolonged outage of the production service (+2 days), the failover would be set to read write mode, allowing the data to be updated.
| In the event of a prolonged disruption, the service would be restored incrementally:
*0.5 working days: a broadcast would be made providing the URL to the read only failover
*1 working day: the DNS entry for goc.egi.eu would be changed to point at the read only failover
*2 working days: the failover would be set to read write mode
|-
| 3
| Service unavailable / loss of data due to human error
| Web portal and PI
| Reduce the likelihood: Webservers are in a highly available pair and under configuration management. Access to production systems is limited, and the service and procedures are documented internally.
Reduce the impact: Webservers are configuration managed, making human error easier to roll back. GOCDB has a separate failover instance. In case of disruption to the primary webservers, the failover is automatically engaged by the load balancers. In the case of prolonged disruption, the failover would be engaged by editing the DNS entry for goc.egi.eu.
| style="background: green"| Low
| In the case of a prolonged outage of the production service (+2 days) due to human error, the failover would be set to read write mode, allowing the data to be updated.
| In the event of a prolonged disruption, the service would be restored incrementally:
*0.5 working days: a broadcast would be made providing the URL to the read only failover
*1 working day: the DNS entry for goc.egi.eu would be changed to point at the read only failover
*2 working days: the failover would be set to read write mode
|-
| 4
| Service unavailable / loss of data due to failure (for whatever reason) of service dependencies managed by other STFC teams (i.e. Networking, VMWare, Load Balancer and Database)
| Web portal and PI
| Reduce the likelihood: Load Balancers are in a highly available pair on separate VMWare hypervisors. The database is a highly available cluster.
Reduce the impact: Load Balancers are under configuration management, making some errors easier to roll back. GOCDB has a separate failover instance at an STFC data centre located at a different site. In the case of prolonged disruption, the failover would be engaged by editing the DNS entry for goc.egi.eu.
| style="background: green"| Low
| In the case of a prolonged outage of the network (+2 days), the failover would be set to read write mode, allowing the data to be updated.
| In the event of a prolonged disruption, the service would be restored incrementally:
*0.5 working days: a broadcast would be made providing the URL to the read only failover
*1 working day: the DNS entry for goc.egi.eu would be changed to point at the read only failover
*2 working days: the failover would be set to read write mode
|-
| 5
| Unavailability of key technical and support staff (holiday periods, sickness, ...)
| Web portal and PI
| Reduce the likelihood: GOCDB technical / support staff try not to be on leave at the same time for a prolonged period. A cover person, capable of handling tickets and investigating minor problems, is assigned in the case of short-term absences, or an 'At Risk' is declared.
| style="background: green"| Low
| Service would be restored on the return of the technical / support staff
| The length of the absence / declared downtime
|-
| 6
| Major disruption in the data centre (fire, flood or electrical failure, for example)
| Web portal and PI, Database
| Reduce the likelihood: The health of the data centre is proactively managed by a team of experts.
Reduce the impact: The GOCDB webservers, and all of their system dependencies (see risk 4), run on UPS, so they should remain operational in the event of an electrical failure. In the case of other, long-lasting data centre disruption, the failover would be engaged.
| style="background: green"| Low
| In the case of a prolonged disruption (+2 days) to the data centre affecting GOCDB, the failover would be set to read write mode, allowing the data to be updated.
| In the event of a prolonged disruption, the service would be restored incrementally:
*0.5 working days: a broadcast would be made providing the URL to the read only failover
*1 working day: the DNS entry for goc.egi.eu would be changed to point at the read only failover
*2 working days: the failover would be set to read write mode
|-
| 7
| Major security incident. The system is compromised by external attackers and needs to be reinstalled and restored.
| Web portal and PI
| Reduce the likelihood: Internal security audits of the GOCDB machines that (for example) ensure packages are up to date, only the necessary ports are open, vulnerability warnings are received, etc.
Reduce the impact: The failover would be engaged while security incidents are being investigated. Use of configuration management means that reinstalling, restoring or deploying a new instance of a GOCDB host to a known state can be done quickly.
| style="background: green"| Low
| In the case of a prolonged outage (+2 days) of the production GOCDB, the failover would be set to read write mode, allowing the data to be updated.
| In the event of a prolonged disruption, the service would be restored incrementally:
*0.5 working days: a broadcast would be made providing the URL to the read only failover
*1 working day: the DNS entry for goc.egi.eu would be changed to point at the read only failover
*2 working days: the failover would be set to read write mode
|-
| 8
| (D)DoS attack. The service is unavailable because of a coordinated DDoS.
| Web portal and PI
| Reduce the impact: The number of requests over time is monitored, so any abnormal increase in traffic or response time would be noticed.
| style="background: green"| Low
| Ability to block offending IPs
| Up to 4 hours
|}


== Outcome ==
The level of all the identified risks is acceptable, and the countermeasures already adopted are considered satisfactory.

== Additional information ==
*An internal recovery procedure is available to the staff.
*The Availability targets don't change in case the plan is invoked.
*Recovery requirements:
**Maximum tolerable period of disruption (MTPoD) (the maximum amount of time that a service can be unavailable or undelivered after an event that causes disruption to operations, before its stakeholders perceive unacceptable consequences): 2 days
**Recovery time objective (RTO) (the acceptable amount of time to restore the service in order to avoid unacceptable consequences associated with a break in continuity; this has to be less than the MTPoD): 1 day
**Recovery point objective (RPO) (the acceptable latency of data that will not be recovered): 2 days (a minimal monitoring sketch is shown after this list)
*The approach for the return to normal working conditions is as reported in the risk assessment.
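
As an illustration of how the RPO target could be monitored (the backup location and file pattern below are placeholders, not the provider's actual layout), a minimal sketch that warns when the newest backup is older than the 2-day RPO:

<syntaxhighlight lang="python">
# Minimal RPO check sketch: warn if the newest backup file is older than the
# 2-day recovery point objective. The directory and file pattern are
# placeholders; the provider's real backup layout is not described here.
import pathlib
import time

RPO_SECONDS = 2 * 24 * 3600                      # 2-day RPO from this plan
BACKUP_DIR = pathlib.Path("/srv/backups/gocdb")  # placeholder path
PATTERN = "*.dump"                               # placeholder file pattern

backups = list(BACKUP_DIR.glob(PATTERN)) if BACKUP_DIR.is_dir() else []
if not backups:
    print("WARNING: no backups found")
else:
    newest = max(backups, key=lambda path: path.stat().st_mtime)
    age_seconds = time.time() - newest.stat().st_mtime
    status = "OK" if age_seconds <= RPO_SECONDS else "WARNING: RPO exceeded"
    print(f"{status}: newest backup {newest.name} is {age_seconds / 3600:.1f} h old")
</syntaxhighlight>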


= Availability and Continuity test =
Given that the off-site failover is engaged by editing DNS entries, and that such changes can take up to 24 hours to take effect in some cases, it was agreed with the provider that testing a recovery scenario is not currently realistic.

In the event that the service at RAL disappears for a prolonged period (1+ day), the DNS entry would be flipped to point at the failover at DL, while reinstalling locally if needed. The failover is configured as a "backup" server in the load balancers, allowing it to serve the production URL automatically in the case of disruption to the primary webservers. The failover at DL is accessible here: https://gocdb.hartree.stfc.ac.uk.
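
As an informal aid (not the provider's failover procedure), the sketch below shows how, during an incident, one could check which addresses goc.egi.eu currently resolves to and whether the DL failover instance is answering; only the hostnames already mentioned on this page are used.

<syntaxhighlight lang="python">
# Informal smoke test (not the provider's procedure): see where goc.egi.eu
# currently resolves to and whether the DL failover instance is responding.
import socket
import urllib.request

PRODUCTION_HOST = "goc.egi.eu"
FAILOVER_URL = "https://gocdb.hartree.stfc.ac.uk"  # failover instance at DL


def resolved_addresses(host: str) -> list[str]:
    """IPv4 addresses the host currently resolves to."""
    infos = socket.getaddrinfo(host, 443, socket.AF_INET)
    return sorted({info[4][0] for info in infos})


def is_responding(url: str) -> bool:
    """True if the URL answers an HTTPS request with status 200."""
    try:
        with urllib.request.urlopen(url, timeout=15) as response:
            return response.status == 200
    except Exception:
        return False


print(f"{PRODUCTION_HOST} resolves to: {', '.join(resolved_addresses(PRODUCTION_HOST))}")
print(f"Failover ({FAILOVER_URL}) responding: {is_responding(FAILOVER_URL)}")
</syntaxhighlight>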


A pre-production instance is deployed at https://gocdb-preprod.egi.eu.


= Revision History =

{| class="wikitable"
! Version
! Authors
! Date
! Comments
|-
| <br>
| Alessandro Paolini
| 2018-08-10
| first draft, discussing with the provider
|-
| <br>
| Alessandro Paolini
| 2018-11-02
| plan finalised
|-
| <br>
| Alessandro Paolini
| 2019-11-19
| starting the yearly review
|-
| <br>
| Alessandro Paolini
| 2020-02-26
| review completed
|-
| <br>
| Alessandro Paolini, Greg Corbett
| 2021-03-30, 2021-04-19
| yearly review; updated risk assessment section; updated Availability and Continuity test section
|}
