
EGI Notebooks Availability and Continuity Plan



Back to main page: Services Availability Continuity Plans

Introduction

This page reports on the Availability and Continuity Plan for EGI Notebooks. It is the result of the risk assessment conducted for this service: a series of risks and threats has been identified and analysed, along with the corresponding countermeasures currently in place. Whenever a countermeasure is not considered satisfactory for avoiding or reducing either the likelihood of a risk occurring or its impact, a new treatment for improving the availability and continuity of the service is agreed with the service provider. The process is concluded with an availability and continuity test.

                     Last        Next
Risks assessment     2019-05-28  2020-05-28
Av/Co plan and test  2019-05-28  2020-05-28

Performance

The performance reports, in terms of Availability and Reliability, are produced by ARGO on a near real-time basis and are also periodically collected into the Documentation Database.

The following monthly performance targets were agreed in the OLA:

  • Availability: 90%
  • Reliability: 90%

So far, Notebooks has not shown any particular Av/Co issues in the performance reports that would need further investigation.
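As a rough, illustrative calculation (not part of the OLA text): a 90% monthly availability target leaves a downtime budget of 0.10 × 30 days × 24 hours = 72 hours per month, so the "up to 1 day" recovery times estimated in the risk analysis below fit comfortably within the target.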

Risks assessment and management

For more details, please refer to the Google spreadsheet. A summary of the assessment is reported here.

Risks analysis

Risk 1: Service unavailable / loss of data due to hardware failure
  Affected components: all the service components
  Established measures: service configuration and deployment on Kubernetes, managed as code in git repositories; daily backup of user storage
  Risk level: Medium
  Expected duration of downtime / time for recovery: up to 1 day
  Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 2: Service unavailable / loss of data due to software failure
  Affected components: all the service components
  Established measures: service configuration and deployment on Kubernetes, managed as code in git repositories; daily backup of user storage
  Risk level: Low
  Expected duration of downtime / time for recovery: up to 1 day
  Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 3: Service unavailable / loss of data due to human error
  Affected components: all the service components
  Established measures: service configuration and deployment on Kubernetes, managed as code in git repositories; daily backup of user storage
  Risk level: Medium
  Expected duration of downtime / time for recovery: up to 1 day
  Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 4: Service unavailable due to network failure (network outage with causes external to the site)
  Affected components: all the service components
  Established measures: service configuration and deployment on Kubernetes, managed as code in git repositories; daily backup of user storage
  Risk level: Low
  Expected duration of downtime / time for recovery: up to 1 day
  Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 5: Unavailability of key technical and support staff (holiday periods, sickness, ...)
  Affected components: all the service components
  Established measures: none
  Risk level: High
  Expected duration of downtime / time for recovery: 1 or more working days
  Comment: documentation and training are needed to increase the number of staff capable of managing the service

Risk 6: Major disruption in the data centre (e.g. fire, flood or electrical failure)
  Affected components: all the service components
  Established measures: service configuration and deployment on Kubernetes, managed as code in git repositories; daily backup of user storage
  Risk level: Low
  Expected duration of downtime / time for recovery: up to 1 working day
  Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 7: Major security incident: the system is compromised by external attackers and needs to be reinstalled and restored
  Affected components: all the service components
  Established measures: service configuration and deployment on Kubernetes, managed as code in git repositories; daily backup of user storage
  Risk level: Low
  Expected duration of downtime / time for recovery: up to 1 working day
  Comment: the measures already in place are considered satisfactory and the risk level is acceptable

Risk 8: (D)DoS attack: the service is unavailable because of a coordinated DDoS
  Affected components: all the service components
  Established measures: service configuration and deployment on Kubernetes, managed as code in git repositories; daily backup of user storage
  Risk level: Low
  Expected duration of downtime / time for recovery: up to 1 working day
  Comment: the measures already in place are considered satisfactory and the risk level is acceptable
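
The recovery log later in this page shows that the daily backup of user storage is taken with restic from the /exports tree. The actual backup deployment is not reproduced here; the following is a minimal sketch of how such a daily backup could be scheduled on the cluster, assuming a Kubernetes CronJob, a restic-credentials secret holding the repository location and password, and an nfs-exports PVC for the user storage (all three names are illustrative, not taken from the real service):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: notebooks-backup           # illustrative name, not the real deployment
  namespace: catchall
spec:
  schedule: "0 2 * * *"            # daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: restic/restic
            # RESTIC_REPOSITORY and RESTIC_PASSWORD come from a pre-created
            # secret; the secret name and keys here are assumptions.
            envFrom:
            - secretRef:
                name: restic-credentials
            args: ["backup", "/exports"]
            volumeMounts:
            - name: user-storage
              mountPath: /exports
              readOnly: true
          volumes:
          - name: user-storage
            persistentVolumeClaim:
              claimName: nfs-exports   # illustrative PVC holding user storage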

Outcome

The level of all the identified risks is acceptable and the countermeasures already adopted are considered satisfactory.

Availability and Continuity test

Test details

The proposed A/C test checks whether recovery from a disruption can be performed by reinstalling the whole service from scratch. The latest backup of the user data will be restored and the time spent will be measured. Performing this test is useful to spot any issues in the recovery procedures of the service.

Test steps:

  1. Deploy Kubernetes on a set of Virtual Machines, get one public IP for the ingress node and record that public IP
  2. Register a new domain name "recover-notebooks.test.fedcloud.eu" in https://nsupdate.egi.eu pointing to the public IP of the k8s ingress
  3. Register a new client in the dev instance of EGI Check-in
  4. Deploy the Notebooks on the Kubernetes cluster, configuring the Check-in client credentials obtained in the previous step and using "recover-notebooks.test.fedcloud.eu" as host name for the ingress (a configuration sketch is given after this list)
  5. Clone the EGI-Notebooks-backup repo
  6. Create the secrets as detailed in the repo
  7. Create the RBAC roles and launch the recovery job
  8. Log in with a user of the original notebooks.egi.eu and check that the files are actually recovered
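
The hub.yaml values file used in step 4 (and in the helm command below) is not reproduced in this page. As an illustration only, with the zero-to-jupyterhub 0.8.x chart the Check-in integration would be configured along these lines; the client credential placeholders and the aai-dev.egi.eu endpoint URLs are assumptions, to be replaced with the values obtained in step 3:

# Fragment of hub.yaml (illustrative, not the actual test values)
proxy:
  secretToken: "<random 32-byte hex string>"
auth:
  type: custom
  custom:
    # GenericOAuthenticator pointed at the EGI Check-in dev instance
    className: oauthenticator.generic.GenericOAuthenticator
    config:
      client_id: "<client id registered in Check-in>"
      client_secret: "<client secret registered in Check-in>"
      oauth_callback_url: "https://recover-notebooks.test.fedcloud.eu/hub/oauth_callback"
      authorize_url: "https://aai-dev.egi.eu/oidc/authorize"
      token_url: "https://aai-dev.egi.eu/oidc/token"
      userdata_url: "https://aai-dev.egi.eu/oidc/userinfo"
      username_key: "sub"
ingress:
  enabled: true
  hosts:
    - recover-notebooks.test.fedcloud.eu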

Test outcome

Installation of the JupyterHub instance:

$ helm install -f hub.yaml --namespace catchall --version=0.8.2 --name hub jupyterhub/jupyterhub
NAME:   hub
LAST DEPLOYED: Thu Jun  6 16:35:44 2019
NAMESPACE: catchall
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME        DATA  AGE
hub-config  1     1s

==> v1/Deployment
NAME   READY  UP-TO-DATE  AVAILABLE  AGE
hub    0/1    1           0          0s
proxy  0/1    1           0          0s

==> v1/PersistentVolumeClaim
NAME        STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS         AGE
hub-db-dir  Bound   pvc-61c44c5c-8868-11e9-a52c-fa163ed00f15  1Gi       RWO           managed-nfs-storage  1s

==> v1/Pod(related)
NAME                    READY  STATUS             RESTARTS  AGE
hub-597c78b9fb-87kn4    0/1    ContainerCreating  0         0s
proxy-8474bf55cb-2xhgs  0/1    ContainerCreating  0         0s

==> v1/Role
NAME  AGE
hub   1s

==> v1/RoleBinding
NAME  AGE
hub   0s

==> v1/Secret
NAME        TYPE    DATA  AGE
hub-secret  Opaque  3     1s

==> v1/Service
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
hub           ClusterIP  10.103.124.120  <none>       8081/TCP                    0s
proxy-api     ClusterIP  10.99.7.137     <none>       8001/TCP                    0s
proxy-public  NodePort   10.97.236.34    <none>       80:30590/TCP,443:30739/TCP  0s

==> v1/ServiceAccount
NAME  SECRETS  AGE
hub   1        1s

==> v1/StatefulSet
NAME              READY  AGE
user-placeholder  0/0    0s

==> v1beta1/Ingress
NAME        HOSTS                               ADDRESS  PORTS  AGE
jupyterhub  recover-notebooks.test.fedcloud.eu  80       0s

==> v1beta1/PodDisruptionBudget
NAME              MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
hub               1              N/A              0                    1s
proxy             1              N/A              0                    1s
user-placeholder  0              N/A              0                    1s
user-scheduler    1              N/A              0                    1s


NOTES:
Thank you for installing JupyterHub!

Your release is named hub and installed into the namespace catchall.

You can find if the hub and proxy is ready by doing:

 kubectl --namespace=catchall get pod

and watching for both those pods to be in status 'Ready'.

You can find the public IP of the JupyterHub by doing:

 kubectl --namespace=catchall get svc proxy-public

It might take a few minutes for it to appear!

Note that this is still an alpha release! If you have questions, feel free to
  1. Read the guide at https://z2jh.jupyter.org
  2. Chat with us at https://gitter.im/jupyterhub/jupyterhub
  3. File issues at https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues

Perform restore of backup
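
The job.yaml manifest applied below comes from the EGI-Notebooks-backup repository and is not reproduced in the test record. As a sketch only, a recovery Job of this kind could look roughly as follows; the image placeholder, secret name and PVC name are assumptions, while the three commands mirror the logged output:

# Illustrative recovery Job; the real manifest lives in EGI-Notebooks-backup
apiVersion: batch/v1
kind: Job
metadata:
  name: notebooks-backup-recover
  namespace: catchall
spec:
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: notebooks-backup   # needs the RBAC roles from step 7
      containers:
      - name: recover
        image: <image built from the EGI-Notebooks-backup repo>
        envFrom:
        - secretRef:
            name: restic-credentials         # restic repository location/password
        command: ["/bin/sh", "-cx"]          # -x produces the '+ ...' trace lines
        args:
        - |
          mkdir /backup
          restic restore latest --target /backup
          python /usr/local/bin/recover.py --backup-path /backup/exports/ --namespace catchall /backup/exports/pvc
        volumeMounts:
        - name: exports
          mountPath: /exports
      volumes:
      - name: exports
        persistentVolumeClaim:
          claimName: nfs-exports             # illustrative name for the user storage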

$ kubectl apply -f job.yaml 
job.batch/notebooks-backup-recover created
$ kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
notebooks-backup-recover-2fntt   1/1     Running   0          6s
$ kubectl logs notebooks-backup-recover-2fntt
+ mkdir /backup
+ restic restore latest --target /backup
created new cache in /root/.cache/restic
restoring <Snapshot 87683f08 of [/exports] at 2019-06-06 14:10:56.913319998 +0000 UTC by root@onebackup-d2n5s> to /backup
+ python /usr/local/bin/recover.py --backup-path /backup/exports/ --namespace catchall /backup/exports/pvc
INFO:root:PVC: 00d584d784d9463b9545e726a290a512
INFO:root:Destination path: /exports/catchall-00d584d784d9463b9545e726a290a512-pvc-b9c1162b-8868-11e9-a52c-fa163ed00f15
INFO:root:Will restore storage of user dfadf63fcb2723480357cb8ff9f0570cda7d2872ca24e65bfe21f0154f238ce2 at /exports/catchall-00d584d784d9463b9545e726a290a512-pvc-b9c1162b-8868-11e9-a52c-fa163ed00f15 from /backup/exports/catchall-00d584d784d9463b9545e726a290a512-pvc-f94a7ab5-4ae1-11e9-85ab-fa163e6125a0
[...]
INFO:root:PVC: fe1c6c56e09d4ea3a4fa0328a43fa925
INFO:root:Destination path: /exports/catchall-fe1c6c56e09d4ea3a4fa0328a43fa925-pvc-f40b68a0-8869-11e9-a52c-fa163ed00f15
INFO:root:Will restore storage of user 025166931789a0f57793a6092726c2ad89387a4cc167e7c63c5d85fc91021d18 at /exports/catchall-fe1c6c56e09d4ea3a4fa0328a43fa925-pvc-f40b68a0-8869-11e9-a52c-fa163ed00f15 from /backup/exports/catchall-fe1c6c56e09d4ea3a4fa0328a43fa925-pvc-b25fddf7-ee78-11e8-8d67-fa163e6125a0
INFO:root:Restored 75 users
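
The closing line ("Restored 75 users") indicates that every user volume known to the backup was processed. Step 8 of the test plan, logging in as a user of the original notebooks.egi.eu and checking the files, is not captured in the log above; a quick server-side check that the per-user volumes were re-created and bound before attempting the login is:

 kubectl --namespace=catchall get pvc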

Revision History

Version Authors Date Comments