Service for AAI: PERUN Availability and Continuity plan

Introduction

This page reports on the Availability and Continuity Plan for Perun and is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, together with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood or the impact of a risk, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process concludes with an availability and continuity test.

 | Last | Next
Risk assessment | 2018-11-05 |
Av/Co plan and test | |

Performance

The performance reports in terms of Availability and Reliability are produced by ARGO on a near real-time basis and are also periodically collected in the Documentation Database. The following monthly performance targets were agreed in the OLA (a short worked example of what they allow follows the list):

  • Availability: 99%
  • Reliability: 99%
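
As a worked illustration of what a 99% monthly target allows, here is a minimal Python sketch. It is not part of the plan: the 30-day month and the availability/reliability formulas (with reliability excluding scheduled downtime from the period) are assumptions in the spirit of the usual ARGO definitions.

  # Minimal sketch (assumptions noted above), not part of the plan.
  MONTH_MINUTES = 30 * 24 * 60                      # 43200 minutes in a 30-day month

  def availability(downtime_min, period_min=MONTH_MINUTES):
      # Fraction of the period the service was up; any downtime counts.
      return (period_min - downtime_min) / period_min

  def reliability(unscheduled_min, scheduled_min=0, period_min=MONTH_MINUTES):
      # As availability, but scheduled downtime is excluded from the period.
      return (period_min - scheduled_min - unscheduled_min) / (period_min - scheduled_min)

  print(MONTH_MINUTES * (1 - 0.99))                          # 432.0 -> about 7.2 hours of downtime allowed per month
  print(round(availability(downtime_min=480), 4))            # 0.9889 -> 8 hours of downtime would miss the target
  print(round(reliability(unscheduled_min=120, scheduled_min=360), 4))  # 0.9972 -> within the 99% target

In other words, the 99% targets leave roughly 7.2 hours of tolerated downtime per month, which is the yardstick used when judging the recovery times reported below.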

Over the past years, PERUN has not had any particular availability or continuity issues highlighted by the performance figures that would need further investigation.

Risk assessment and management

For more details, please refer to the Google spreadsheet; a summary of the assessment is reported below.

Risk analysis

Risk id | Risk description | Affected components | Established measures | Risk level | Treatment | Expected duration of downtime / time for recovery
1 | Service unavailable / loss of data due to hardware failure | All | All components and host machines are regularly backed up at OS and application level. The host runs in a redundant virtual environment. | Low | Restore the service from the backups on new hosts not affected by the failure. | up to 8 hours
2 | Service unavailable / loss of data due to software failure | Load balancer, IdP/SP proxy, Master Portal | All components and host machines are regularly backed up at OS and application level. | Low | Restore the service from the backups; analyse log data to find the reason for the software failure. | up to 8 hours
3 | Service unavailable / loss of data due to human error | Load balancer, IdP/SP proxy, Master Portal | All components and host machines are regularly backed up at OS and application level. Only trained personnel have access to the machines. | Medium | Restore the service/data from backups. Review the actions that led to the failure and modify the processes so that the same error cannot happen again. | up to 4 hours
4 | Service unavailable due to network failure (network outage with causes external to the site) | Load balancer, IdP/SP proxy, Master Portal | Data centres have redundant network connections and the LDAP servers are located in two different data centres. | Medium | Standard mitigation and recovery procedures carried out by the network operators. | up to 4 hours
5 | Unavailability of key technical and support staff (holiday periods, sickness, ...) | Load balancer, IdP/SP proxy, Master Portal | At least one person from the core team (5 team members) is still available. | Low | At least one person from the core team (5 team members) is available. | 1 or more working days
6 | Major disruption in the data centre (e.g. fire, flood or electrical failure) | Load balancer, IdP/SP proxy, Master Portal | All components and host machines are regularly backed up at OS and application level. Machines are located in well maintained and secured data centres. LDAP is located in two different data centres. | Low | Restore the service from the backups on new hosts not affected by the failure. | up to 8 hours
7 | Major security incident. The system is compromised by external attackers and needs to be reinstalled and restored. | All | All components have network security monitoring in place and are supervised by the CSIRT team. All components are backed up in a backup system that is not accessible from the host machine. The Perun software and its components are regularly checked by penetration testing. | Low | Restore the service from backups on new hosts. | up to 8 hours
8 | (D)DoS attack. The service is unavailable because of a coordinated DDoS. | Load balancer, IdP/SP proxy, Master Portal | Network monitoring is in place and active network elements deployed in the CESNET network are able to lower the risk of DDoS attacks. | Medium | Use of DDoS protection at the network level provided by CESNET. | less than 1 hour
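
The Low/Medium levels above come from the scoring kept in the referenced spreadsheet. Purely as an illustration of that kind of scoring, the sketch below derives a level from likelihood and impact scores; the 1-5 scales and the thresholds are assumptions made for the example, not the values actually used for PERUN.

  # Illustrative only: deriving a risk level from likelihood x impact.
  # The 1-5 scales and thresholds are assumptions, not the PERUN spreadsheet values.
  def risk_level(likelihood, impact):
      score = likelihood * impact        # both assumed to be on a 1 (lowest) to 5 (highest) scale
      if score <= 6:
          return "Low"
      if score <= 14:
          return "Medium"
      return "High"

  # Example: a failure judged unlikely (2) with a moderate impact (3) comes out Low.
  print(risk_level(2, 3))                # Low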

Outcome

The level of all the identified risks is acceptable and the countermeasures already adopted are considered satisfactory.

Availability and Continuity test

The proposed A/C test will focus on a recovery scenario: the service has been disrupted and needs to be reinstalled from scratch. The time needed to restore the service will be measured, using the last backup of the data stored in it and evaluating any loss of information. Performing this test will be useful for spotting issues in the recovery procedures of the service.

The test showed that the recovery takes only a few minutes (see the short calculation after the list):

  • copying a snapshot of the production system: 3.75 minutes
  • booting the machine: 0.5 minutes
  • changing the IP address: 0.5 minutes
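
Putting these measured figures against the monthly targets from the OLA, a short sketch follows (the 30-day month is an assumption; the timings are those measured in the test):

  # Sketch: total measured recovery time against the 99% monthly availability budget.
  steps_min = {"copy snapshot": 3.75, "machine boot": 0.5, "change IP address": 0.5}
  recovery_min = sum(steps_min.values())            # 4.75 minutes in total
  budget_min = 30 * 24 * 60 * (1 - 0.99)            # 432 minutes of downtime allowed per month
  print(recovery_min, budget_min, round(recovery_min / budget_min, 3))  # 4.75 432.0 0.011

So a full in-place recovery consumes only around 1% of the monthly downtime budget.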

Moreover, full backups are also available, so a machine can be recovered in a completely different location as well. This takes a little longer because it requires a clean installation of the machine, followed by software and data recovery and, as a last step, a DNS modification.

The test can be considered successful: even if restoring the service takes longer, the other components of the infrastructure can keep using the last data available before the downtime. Managing users and groups is not an hourly or daily activity but is performed when needed, so even if the service is unavailable for a few hours or a day, this is acceptable.

Revision History

Version | Authors | Date | Comments
 | Alessandro Paolini | 2018-05-07 | first draft, discussing with the provider
 | Alessandro Paolini | 2018-08-02 | Plan finalised