The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.

|Doc_title = Cloud Resource Centre Installation Manual
|Doc_link = [[MAN10|https://wiki.egi.eu/wiki/MAN10]]
|Version =  19 May 2017
|Policy_acronym = OMB
|Policy_name = Operations Management Board
}}


{{Template:Block-comment
| name=Warning
| text=The installation manual is now available at https://docs.egi.eu/. The information below just points to the relevant sections of that manual
}}


= Common prerequisites and documentation =
General minimal requirements are:
* Very minimal hardware is required to join. Hardware requirements depend on:
** the cloud stack you use
** the amount of resources you want to make available
** the number of users/use cases you want to support
*Servers need to authenticate each other in the EGI Federated Cloud context; this is fulfilled using X.509 certificates, so a Resource Centre should be able to obtain server certificates for some services.
*User and research communities are called Virtual Organisations (VOs). Support for at least 3 VOs is needed to join as a Resource Centre:
** ops and dteam, used for operational purposes as per RC OLA
** fedcloud.egi.eu: this VO provides resources for application prototyping and validation
*The operating systems supported by the EGI Federated Cloud Management Framework are:
** Scientific Linux 6, CentOS 7 (and in general RHEL-compatible)
** Ubuntu (and in general Debian-based)
In order to configure Virtual Organisations and private image lists, please consider the following guides to:
* [[HOWTO16|enable a Virtual Organisation on an EGI Federated Cloud site]]
* [https://wiki.appdb.egi.eu/main:faq:how_to_get_access_to_vo-wide_image_lists get access to VO wide image lists]
* [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher subscribe to a private image list]
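The server-certificate requirement above can be sanity-checked with openssl. A minimal sketch, using a throwaway self-signed certificate in place of a real IGTF-trusted one (the paths and CN are placeholders):

```shell
# Generate a throwaway self-signed certificate purely for illustration;
# a real site keeps its IGTF-trusted certificate under /etc/grid-security/
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/CN=cloud.example.org" \
        -keyout /tmp/hostkey.pem -out /tmp/hostcert.pem
# Inspect subject and expiry, as you would for your real host certificate
openssl x509 -in /tmp/hostcert.pem -noout -subject -enddate
```

The same `openssl x509` invocation against `/etc/grid-security/hostcert.pem` shows whether the certificate a service presents is the one you expect and when it expires.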
= Integrating OpenStack  =
Integration with FedCloud requires a working OpenStack installation as a prerequisite (see http://docs.openstack.org/ for details). There are packages ready to use for most distributions (check for example [https://www.rdoproject.org/ RDO] for Red Hat-based distributions).
OpenStack integration with FedCloud is known to work with the following versions of OpenStack:
* ''Havana'' (EOL by OpenStack)
* ''Icehouse'' (EOL by OpenStack)
* ''Juno'' (EOL by OpenStack)
* '''Kilo''' (Security-supported, EOL: 2016-05-02)
* '''Liberty''' (Current stable release)
See http://releases.openstack.org/ for more details on the OpenStack releases.
== Integration components ==
Which components must be installed and configured depends on the services the RC wants to provide.
* Keystone must be always available
* If providing '''VM Management''' features (OCCI access or OpenStack access), then '''Nova, Cinder and Glance''' must be available; also '''Neutron''' is needed, but nova-network can also be used for legacy installations (see  [http://docs.openstack.org/havana/install-guide/install/yum/content/section_networking-routers-with-private-networks.html here] how to configure per-tenant routers with private networks).
[[File:Openstack-fedcloud.png|800px]]
As you can see from the schema above, the integration is performed by installing some EGI extensions on top of the OpenStack components.
*'''Keystone-VOMS Authorization plugin''' allows users with a valid VOMS proxy to access the OpenStack deployment
*'''OpenStack OCCI Interface (ooi)''' translates between OpenStack API and OCCI
*'''cASO''' collects accounting data from OpenStack
*'''SSM''' sends the records extracted by cASO to the central accounting database on the EGI Accounting service (APEL)
*'''BDII cloud provider''' registers the RC configuration and description through the EGI Information System to facilitate service discovery
*'''vmcatcher''' checks the [https://appdb.egi.eu/browse/cloud EGI App DB] for new or updated images that can be provided by the RC to the user communities (VO) supported
*The vmcatcher hooks ('''glancepush''' and '''OpenStack handler for vmcatcher''') push updated subscribed images from vmcatcher to Glance, using the OpenStack Python API
== EGI User Management/AAI (Keystone-VOMS) ==
Every FedCloud site must support authentication of users with X.509 certificates with VOMS extensions. The Keystone-VOMS extension enables this kind of authentication on Keystone.
Documentation on the installation is available on https://keystone-voms.readthedocs.org/
Notes:
* '''You need a host certificate from a recognised CA for your keystone server'''.
* Take into account that using the keystone-voms plugin enforces the use of https for your Keystone service; you will need to update the URLs in the Keystone catalog and in the configuration of your services:
** You will probably need to include your CA in your system's CA bundle to avoid certificate validation issues: <code>/etc/ssl/certs/ca-certificates.crt</code> from the <code>ca-certificates</code> package on Debian/Ubuntu systems, or <code>/etc/pki/tls/certs/ca-bundle.crt</code> from the <code>ca-certificates</code> package on RHEL and derived systems. The [[Federated_Cloud_APIs_and_SDKs#CA_CertificatesCheck|Federated Cloud OpenStack Client guide]] includes information on how to do it.
** replace http with https in <code>auth_[protocol|uri|url]</code> and <code>auth_[host|uri|url]</code> in the nova, cinder, glance and neutron config files (<code>/etc/nova/nova.conf</code>, <code>/etc/nova/api-paste.ini</code>, <code>/etc/neutron/neutron.conf</code>, <code>/etc/neutron/api-paste.ini</code>, <code>/etc/neutron/metadata_agent.ini</code>, <code>/etc/cinder/cinder.conf</code>, <code>/etc/cinder/api-paste.ini</code>, <code>/etc/glance/glance-api.conf</code>, <code>/etc/glance/glance-registry.conf</code>, <code>/etc/glance/glance-cache.conf</code>) and any other service that needs to check keystone tokens.
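The http-to-https replacement can be scripted with sed. A sketch against a temporary file (the host name <code>keystone.example.org</code> and the temp file are placeholders; on a real node you would target the config files listed above, after backing them up):

```shell
# Create a stand-in config file with http Keystone endpoints (placeholders)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[keystone_authtoken]
auth_uri = http://keystone.example.org:5000/v2.0
auth_url = http://keystone.example.org:35357/v2.0
EOF
# Rewrite only the Keystone URLs to https, leaving anything else untouched
sed -i 's|http://keystone.example.org|https://keystone.example.org|g' "$cfg"
grep 'auth_' "$cfg"
```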
** You can update the URLs of the services directly in the database:
<pre>
mysql> use keystone;
mysql> update endpoint set url="https://<keystone-host>:5000/v2.0" where url="http://<keystone-host>:5000/v2.0";
mysql> update endpoint set url="https://<keystone-host>:35357/v2.0" where url="http://<keystone-host>:35357/v2.0";
</pre>
* Support for EGI VOs: [[HOWTO16 | VOMS configuration]], you should configure fedcloud.egi.eu, dteam and ops VOs.
* VOMS-Keystone configuration: most sites should enable the <code>autocreate_users</code> option in the <code>[voms]</code> section of the [https://keystone-voms.readthedocs.org/en/latest/configuration.html Keystone-VOMS configuration]. With this option, new users are automatically created in your local Keystone the first time they log in to your site.
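A sketch of the relevant fragment of the Keystone-VOMS configuration; the file location and the <code>voms_policy</code> path may differ in your deployment, so treat the values as placeholders and consult the linked documentation for the authoritative option list:

```ini
[voms]
# JSON file mapping VOs to local tenants (assumed default location)
voms_policy = /etc/keystone/voms.json
# Create local users automatically on their first login
autocreate_users = True
```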
* If (and only if) you need to configure the Per-User Subproxy (PUSP) feature, please follow the specific guide at [[Long-tail_of_science#Instructions_for_OpenStack_providers]]
== EGI Virtual Machine Management Interface ==
EGI currently operates two realms: the Open Standards Realm and the OpenStack Realm. Both are completely integrated with the EGI federation services, but they expose different interfaces to offer IaaS capabilities to the users. The Open Standards Realm uses the OCCI standard (supported by providers with OpenNebula, OpenStack and Synnefo cloud management frameworks), while the OpenStack Realm uses the OpenStack native Nova API (support limited to OpenStack providers).
You can provide your service in one or both of the realms. For the OpenStack Realm, you just need to declare your endpoint in GOCDB as described below. For the Open Standards Realm you will need to deploy an additional service providing OCCI access.
[https://github.com/openstack/ooi ooi] is the recommended software to provide OCCI for OpenStack (from Juno onwards). Installation and configuration of ooi is available at [http://ooi.readthedocs.org/en/stable/index.html ooi documentation]. Packages for several distributions can be found at [https://appdb.egi.eu/store/software/ooi ooi entry at EGI's AppDB] (recommended version is 0.2.0)
For older OpenStack releases [https://github.com/EGI-FCTF/OCCI-OS OCCI-OS] can be used. Follow the <code>README.md</code> file in the github repo for instructions on installation and configuration. Be sure to select the branch (e.g. <code>stable/icehouse</code>) corresponding to your OpenStack deployment.
Once the OCCI interface is installed, you should register it on your installation (adapt the region and URL to your deployment):
<pre>
$ openstack service create --name occi --description "OCCI Interface" occi
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OCCI Interface                   |
| enabled     | True                             |
| id          | 6dfd6a56c9a6456b84e8c86038e58f56 |
| name        | occi                             |
| type        | occi                             |
+-------------+----------------------------------+
$ openstack endpoint create --region RegionOne occi --publicurl http://172.16.4.70:8787/occi1.1
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OCCI service                     |
| id          | 8e6de5d0d7624584bed6bec9bef7c9e0 |
| name        | occi_api                         |
| type        | occi                             |
+-------------+----------------------------------+
</pre>
== Integration with EGI FedCloud Appliance ==
The EGI FedCloud Appliance packages a set of docker containers to federate an OpenStack deployment with some EGI services:
* Information System  (BDII)
* Accounting (cASO,  SSM)
* Image management (atrope)
You can get the current version of the appliance at its [https://appdb.egi.eu/store/vappliance/fedcloud.integration.appliance.openstack AppDB entry]. It is available as an OVA file. You can easily extract the VMDK disk from the OVA by untarring the file.
=== Pre-requisites ===
The appliance works by querying the public APIs of an existing OpenStack installation. It assumes [http://keystone-voms.readthedocs.org/ Keystone-VOMS] is installed at that OpenStack and the <code>voms.json</code> file is properly configured.
The appliance uses the following OpenStack APIs:
* nova, for getting images and flavors available and to get usage information
* keystone, for authentication and for getting the available tenants
* glance, for querying, uploading and removing VM images.
Not all services need to be accessed with the same credentials. Each component is individually configured.
A host certificate is needed to sign the accounting records before they are sent to the accounting repository. The DN of the host certificate must be registered in GOCDB under service type eu.egi.cloud.accounting (see the [[MAN10#Registration.2C_validation_and_certification|registration section]] below for more information).
'''Note:'''
* VM Image replication requires large disk space for storing the downloaded images. By default these are stored at <code>/image_data</code>. You can mount a volume at that location.
* The appliance should be accessible by the EGI Information System. EGI information system will check GOCDB for the exact location of your appliance (see the [[MAN10#Registration.2C_validation_and_certification|registration section]] below for more information).
=== EGI Accounting (cASO/SSM) ===
There are two different processes handling the accounting integration:
* cASO, which connects to the OpenStack deployment to get the usage information, and,
* ssmsend, which sends that usage information to the central EGI accounting repository.
They are run by cron every hour (cASO) and every six hours (ssmsend).
[http://caso.readthedocs.org/en/latest/configuration.html cASO configuration] is stored at <code>/etc/caso/caso.conf</code>. Most default values are ok, but you must set:
* <code>site_name</code> (line 100)
* <code>tenants</code> (line 104)
* credentials to access the accounting data (lines 122-128). Check the [http://caso.readthedocs.org/en/latest/configuration.html#openstack-configuration cASO documentation] for the expected permissions of the user configured here.
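A sketch of the corresponding fragment of <code>/etc/caso/caso.conf</code>; the site name, tenant list and credentials below are placeholders, and the exact option names for the credentials should be checked against the cASO documentation linked above:

```ini
[DEFAULT]
# Site name, exactly as registered in GOCDB (placeholder)
site_name = MY-SITE
# Tenants (one per supported VO) to extract records for (placeholders)
tenants = egi, ops
# Credentials of an OpenStack user allowed to read usage data (placeholders)
username = accounting
password = changeme
```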
The cron job will use the voms mapping file at <code>/etc/voms.json</code>.
cASO will write records to <code>/var/spool/apel</code> where ssmsend will take them.
SSM configuration is available at <code>/etc/apel</code>. Defaults should be ok for most cases. The cron file uses <code>/etc/grid-security</code> for the CAs and the host certificate and private keys (in <code>/etc/grid-security/hostcert.pem</code> and <code>/etc/grid-security/hostkey.pem</code>).
==== Running the services ====
Both caso and ssmsend are run via cron scripts. They are located at <code>/etc/cron.d/caso</code> and <code>/etc/cron.d/ssmsend</code> respectively. For convenience there are also two scripts, <code>/usr/local/bin/caso-extract.sh</code> and <code>/usr/local/bin/ssm-send.sh</code>, that run the docker container with the proper volumes.
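As an illustration, a cron entry along these lines runs the extraction hourly; the exact schedule and file contents shipped with the appliance may differ, so this is a hypothetical sketch only:

```
# Hypothetical sketch of /etc/cron.d/caso; the appliance ships its own version
10 * * * * root /usr/local/bin/caso-extract.sh
```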
=== EGI Information System (BDII) ===
Information discovery provides a real-time view about the actual images and flavors available at the OpenStack for the federation users. It has two components:
* Resource-Level BDII: which queries the OpenStack deployment to get the information to publish
* Site-Level BDII: gathers information from several resource-level BDIIs (in this case only 1) and makes it publicly available for the EGI information system.
==== Resource-level BDII ====
This is provided by container <code>egifedcloud/cloudbdii</code>. You need to configure:
* <code>/etc/cloud-info-provider/openstack.rc</code>, with the credentials to query your OpenStack. The user configured just needs to be able to access the lists of images and flavors.
* <code>/etc/cloud-info-provider/openstack.yaml</code>, this file includes the static information of your deployment. Make sure to set the <code>SITE-NAME</code> as defined in GOCDB.
==== Site-level BDII ====
The <code>egifedcloud/sitebdii</code> container runs this process. Configuration files:
* <code>/etc/sitebdii/glite-info-site-defaults.conf</code>. Set here the name of your site (as defined in GOCDB) and the public hostname where the appliance will be available.
* <code>/etc/sitebdii/site.cfg</code>. Include here basic information on your site.
==== Running the services ====
In order to run the information discovery containers, there is a docker-compose file at <code>/etc/sitebdii/docker-compose.yml</code>. Run it with:
<pre>
docker-compose -f /etc/sitebdii/docker-compose.yml up -d
</pre>
Check the status with:
<pre>
docker-compose -f /etc/sitebdii/docker-compose.yml ps
</pre>
You should be able to get the BDII information with an LDAP client, e.g.:
<pre>
ldapsearch -x -p 2170 -h <yourVM.hostname.domain.com> -b o=glue
</pre>
=== EGI Image Management (atrope) ===
The appliance provides VMI replication with [https://github.com/alvarolopez/atrope atrope], an alternative implementation to vmcatcher. Every 12 hours, the appliance will perform the following actions:
* download the lists configured in <code>/etc/atrope/hepix.yaml</code> and verify their signatures
* check any changes in the lists and download new images
* synchronise this information to the configured glance endpoint
Configure the glance credentials in the <code>/etc/atrope/atrope.conf</code> file and add the lists you want to download to <code>/etc/atrope/hepix.yaml</code>. See the following example for the fedcloud.egi.eu list:
<pre>
# This must match the VO name configured at the voms.json file
fedcloud.egi.eu:
    url: https://vmcaster.appdb.egi.eu/store/vo/fedcloud.egi.eu/image.list
    enabled: true
    # All image lists from AppDB will have this endorser
    endorser:
        dn: '/DC=EU/DC=EGI/C=NL/O=Hosts/O=EGI.eu/CN=appdb.egi.eu'
        ca: "/DC=ORG/DC=SEE-GRID/CN=SEE-GRID CA 2013"
    # You must get this from AppDB
    token: 17580f07-1e33-4a38-94e3-3386daced5be
    # if you want to restrict the images downloaded from the AppDB, you can add here a list of the identifiers
    # check the "dc:identifier" field in the image list file.
    images: []
    # images names will prefixed with this string for easy identification
    prefix: "FEDCLOUD "
</pre>
Check [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher How to subscribe to a private image list] for instructions on getting the URL and token. The <code>prefix</code>, if specified, will be prepended to the image name in glance for easy identification. You can define a subset of images to download with the <code>images</code> field.
==== Running the service ====
atrope is run via a cron script: <code>/etc/cron.d/atrope</code>. For convenience, the <code>/usr/local/bin/atrope-dispatch.sh</code> script runs the docker container with the proper volumes.
== Integration with individual components ==
=== EGI Accounting (cASO/SSM) ===
Every cloud RC should publish utilization data to the EGI accounting database. You will need to install '''cASO''', a pluggable extractor of Cloud Accounting Usage Records from OpenStack.
Documentation on how to install and configure cASO is available at https://caso.readthedocs.org/en/latest/
In order to send the records to the accounting database, you will also need to configure '''SSM''', whose documentation can be found at https://github.com/apel/ssm
=== EGI Information System (BDII) ===
Sites must publish information to the EGI information system, which is based on BDII. The BDII can be installed easily directly from the distribution repository; the package is usually named "bdii".
There is a common cloud information provider for all cloud management frameworks that collects the information from the CMF in use and sends it to the aforementioned BDII. It can be installed on the same machine as the BDII or on another machine. The installation and configuration guide for the cloud information provider can be found in the [[HOWTO15|FedCloud BDII instructions]]; more detailed installation and configuration instructions are available at https://github.com/EGI-FCTF/cloud-bdii-provider
=== EGI Image Management (vmcatcher, glancepush) ===
Sites in FedCloud offering VM management capability must give access to VO-endorsed VM images. This functionality is provided with vmcatcher (which is able to subscribe to the image lists available in AppDB) and a set of tools that push the subscribed images into the glance catalog. In order to subscribe to VO-wide image lists, you need a valid access token to the AppDB. Check the [https://wiki.appdb.egi.eu/main:faq:how_to_get_access_to_vo-wide_image_lists how to get access to VO-wide image lists] and [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher how to subscribe to a private image list] documentation for more information.
Please refer to [https://github.com/hepix-virtualisation/vmcatcher vmcatcher documentation] for installation. 
Vmcatcher can be connected to the OpenStack Glance catalog using the [https://appdb.egi.eu/store/software/python.glancepush python-glancepush] tool and the [https://appdb.egi.eu/store/software/openstack.handler.for.vmcatcher Openstack Handler for Vmcatcher] event handler. To install and configure glancepush and the handler, you can refer to the following instructions:
*Install the latest release of glancepush from https://appdb.egi.eu/store/software/python.glancepush
**for Debian-based systems, download the tarball, extract it, and run <code>python setup.py install</code> from the extracted directory:
<pre>
[stack@ubuntu]$ wget http://repository.egi.eu/community/software/python.glancepush/0.0.X/releases/generic/0.0.6/python-glancepush-0.0.6.tar.gz
[stack@ubuntu]$ tar -zxvf python-glancepush-0.0.6.tar.gz
[stack@ubuntu]$ cd python-glancepush-0.0.6
[stack@ubuntu]$ python setup.py install
</pre>
**for RHEL6 you can run:
<pre>
[stack@rhel]$ yum localinstall http://repository.egi.eu/community/software/python.glancepush/0.0.X/releases/sl/6/x86_64/RPMS/python-glancepush-0.0.6-1.noarch.rpm
</pre>
*Then, configure the glancepush directories:
<pre>
[stack@ubuntu]$ sudo mkdir -p /var/spool/glancepush /etc/glancepush/log /etc/glancepush/transform /etc/glancepush/clouds /var/log/glancepush
[stack@ubuntu]$ sudo chown stack:stack -R /var/spool/glancepush /etc/glancepush /var/log/glancepush
</pre>
*Copy the file <code>/etc/keystone/voms.json</code> to <code>/etc/glancepush/voms.json</code>. Then create a file in the <code>/etc/glancepush/clouds</code> directory for every VO to which you are subscribed. For example, if you're subscribed to fedcloud, atlas and lhcb, you'll need 3 files in the <code>/etc/glancepush/clouds</code> directory with the credentials for these VOs/tenants, for example:
<pre>
[general]
# Tenant for this VO. Must match the tenant defined in the voms.json file
testing_tenant=egi
# Identity service endpoint (Keystone)
endpoint_url=https://server4-eupt.unizar.es:5000/v2.0
# User password
password=123456
# User
username=John
# Set this to true if you're NOT using self-signed certificates
is_secure=True
# SSH private key that will be used to perform policy checks (to be done)
ssh_key=Carlos_lxbifi81
# WARNING: Only define the next variable if you're going to need it. Otherwise you may encounter problems
cacert=path_to_your_cert
</pre>
*Install the [https://appdb.egi.eu/store/software/openstack.handler.for.vmcatcher Openstack handler for vmcatcher]. For Debian-based systems, download the tarball, extract it, and run <code>python setup.py install</code> from the extracted directory:
<pre>
[stack@ubuntu]$ wget http://repository.egi.eu/community/software/openstack.handler.for.vmcatcher/0.0.X/releases/generic/0.0.7/gpvcmupdate-0.0.7.tar.gz
[stack@ubuntu]$ tar -zxvf gpvcmupdate-0.0.7.tar.gz
[stack@ubuntu]$ cd gpvcmupdate-0.0.7
[stack@ubuntu]$ python setup.py install
</pre>
while for RHEL6 you can run:
<pre>
[stack@rhel]$ yum localinstall http://repository.egi.eu/community/software/openstack.handler.for.vmcatcher/0.0.X/releases/sl/6/x86_64/RPMS/gpvcmupdate-0.0.7-1.noarch.rpm
</pre>
*Create the vmcatcher folders for OpenStack:
<pre>
[stack@ubuntu]$ mkdir -p /opt/stack/vmcatcher/cache /opt/stack/vmcatcher/cache/partial /opt/stack/vmcatcher/cache/expired
</pre>
*Check that vmcatcher is running properly by listing and subscribing to an image list:
<pre>
[stack@ubuntu]$ export VMCATCHER_RDBMS="sqlite:////opt/stack/vmcatcher/vmcatcher.db"
[stack@ubuntu]$ vmcatcher_subscribe -l
[stack@ubuntu]$ vmcatcher_subscribe -e -s https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list
[stack@ubuntu]$ vmcatcher_subscribe -l
8ddbd4f6-fb95-4917-b105-c89b5df99dda    True    None    https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list
</pre>
*Create a CRON wrapper for vmcatcher, named <code>$HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh</code>, with the following code:
<pre>
#!/bin/bash
# Cron handler for the VMCatcher image synchronization script for OpenStack

# Vmcatcher configuration variables
export VMCATCHER_RDBMS="sqlite:////opt/stack/vmcatcher/vmcatcher.db"
export VMCATCHER_CACHE_DIR_CACHE="/opt/stack/vmcatcher/cache"
export VMCATCHER_CACHE_DIR_DOWNLOAD="/opt/stack/vmcatcher/cache/partial"
export VMCATCHER_CACHE_DIR_EXPIRE="/opt/stack/vmcatcher/cache/expired"
export VMCATCHER_CACHE_EVENT="python $HOME/gpvcmupdate/gpvcmupdate.py -D"

# Update vmcatcher image lists
vmcatcher_subscribe -U

# Add all the new images to the cache
for a in `vmcatcher_image -l | awk '{if ($2==2) print $1}'`; do
  vmcatcher_image -a -u $a
done

# Update the cache
vmcatcher_cache -v -v

# Run glancepush
/usr/bin/glancepush.py
</pre>
*Set the newly created file as executable:
<pre>
[stack@ubuntu]$ chmod +x $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh
</pre>
*Test that the vmcatcher handler is working correctly by running:
<pre>
[stack@ubuntu]$ $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh
INFO:main:Defaulting actions as 'expire', and 'download'.
DEBUG:Events:event 'ProcessPrefix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'
DEBUG:Events:stdout=
DEBUG:Events:stderr=Ignoring ProcessPrefix event.
INFO:DownloadDir:Downloading '541b01a8-94bd-4545-83a8-6ea07209b440'.
DEBUG:Events:event 'AvailablePrefix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'
DEBUG:Events:stdout=AvailablePrefix
DEBUG:Events:stderr=
INFO:CacheMan:moved file 541b01a8-94bd-4545-83a8-6ea07209b440
DEBUG:Events:event 'AvailablePostfix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'
DEBUG:Events:stdout=AvailablePostfixCreating Metadata Files
DEBUG:Events:stderr=
DEBUG:Events:event 'ProcessPostfix' executed 'python /opt/stack/gpvcmupdate/gpvcmupdate.py'
DEBUG:Events:stdout=
DEBUG:Events:stderr=Ignoring ProcessPostfix event.
</pre>
*Add the following line to the stack user crontab:
<pre>
50 */6 * * * $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh >> /var/log/glancepush/vmcatcher.log 2>&1
</pre>
''NOTES:''
*It is recommended to execute glancepush and vmcatcher_cache as stack or another non-root user.
*Images that expire in vmcatcher are also removed from OpenStack.
== Post-installation ==
After the installation of all the needed components, it is recommended to set the following policies on Nova to avoid users accessing other users resources:
<pre>
[root@egi-cloud]# sed -i 's|"admin_or_owner":  "is_admin:True or project_id:%(project_id)s",|"admin_or_owner":  "is_admin:True or project_id:%(project_id)s",\n    "admin_or_user":  "is_admin:True or user_id:%(user_id)s",|g' /etc/nova/policy.json
[root@egi-cloud]# sed -i 's|"default": "rule:admin_or_owner",|"default": "rule:admin_or_user",|g' /etc/nova/policy.json
[root@egi-cloud]# sed -i 's|"compute:get_all": "",|"compute:get": "rule:admin_or_owner",\n    "compute:get_all": "",|g' /etc/nova/policy.json
</pre>
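After these edits, the affected entries of <code>/etc/nova/policy.json</code> should look like the following fragment (other entries omitted):

```json
{
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "admin_or_user": "is_admin:True or user_id:%(user_id)s",
    "default": "rule:admin_or_user",
    "compute:get": "rule:admin_or_owner",
    "compute:get_all": ""
}
```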
== Registration, validation and certification ==
As mentioned in the [https://wiki.egi.eu/wiki/Federated_Cloud_resource_providers_support main page], RC services must be '''registered''' in the [https://goc.egi.eu EGI Configuration Management Database (GOCDB)]. If you are creating a new site for your cloud services, please follow the [https://wiki.egi.eu/wiki/PROC09 Resource Centre Registration and Certification] with the help of EGI Operations and your reference Resource Infrastructure.
You will need to register the following services (all of them can be provided by the Federated Cloud Appliance):
* '''Site-BDII'''. This service collects and publishes site's data for the Information System. Existing sites should already have this registered.
* '''eu.egi.cloud.accounting'''. Register here the host sending the records to the accounting repository (executing SSM send).
* '''eu.egi.cloud.vm-metadata.vmcatcher''' for the VMI replication mechanism. Register here the host providing the replication.
If offering OCCI interface, the site must register also:
* '''eu.egi.cloud.vm-management.occi''' for the OCCI endpoint offered by the site. Please note the special endpoint URL syntax described at [[Federated_Cloud_Technology#eu.egi.cloud.vm-management.occi|GOCDB usage in FedCloud]]
If offering native OpenStack access, you must register:
* '''org.openstack.nova''' for the Nova endpoint of the site.  Please note the special endpoint URL syntax described at [[Federated_Cloud_Technology#org.openstack.nova|GOCDB usage in FedCloud]]
Sites should also declare the following properties using the ''Site Extension Properties'' feature:
# Max number of virtual cores per VM, with parameter name <code>cloud_max_cores4VM</code>
# Max amount of RAM per VM, with parameter name <code>cloud_max_RAM4VM</code>, using the format value+unit, e.g. "16GB"
# Max amount of storage that can be mounted in a VM, with parameter name <code>cloud_max_storage4VM</code>, using the format value+unit, e.g. "16GB"
The '''installation validation''' is part of the aforementioned [https://wiki.egi.eu/wiki/PROC09 Resource Centre Registration and Certification] procedure. After you register the services in GOCDB, EGI Operations will test your services using the [[HOWTO04_Site_Certification_Manual_tests#Check_the_functionality_of_the_cloud_elements|site certification manual tests]] mentioned in the same procedure. It is important to use that guide to test the services published to check that they are behaving properly.
Once the site services are registered in GOCDB (and flagged as "monitored") they will appear in the EGI service monitoring tools. EGI will check the status of the services (see [https://wiki.egi.eu/wiki/Federated_Cloud_infrastructure_status Infrastructure Status] for details). Check that your services are present in the EGI service monitoring tools and passing the tests; if you experience any issues (services not shown, services not OK, ...) please contact EGI Operations or your reference Resource Infrastructure.
= Integrating OpenNebula  =
EGI Cloud Site based on OpenNebula is an ordinary OpenNebula installation with some EGI-specific integration components. There are no additional requirements placed on internal site architecture.
[[File:OpenNebulaSite.png]]
The following '''components''' must be installed alongside OpenNebula:
* '''vmcatcher''', which checks the [https://appdb.egi.eu/browse/cloud EGI App DB] for new or updated images that need to be supported on the site. It downloads images and registers them with OpenNebula, so that they can be used in resource instantiation. Vmcatcher configuration is [[#EGI_Image_Management_2|explained below]].
* '''rOCCI-server''', which provides a standard OCCI interface. It translates between the OpenNebula API and OCCI. It must be configured to use its ''opennebula'' backend, and to use ''voms'' for authentication. Follow the [[rOCCI:ROCCI-server_Admin_Guide|rOCCI-server Admin Guide]] for installation, and check [[#rOCCI-server + VOMS|below]] for FedCloud-specific configuration.
* '''local perun scripts''', which allow Perun to set up, block and remove user accounts in OpenNebula, thus managing the full life cycle of a user account. Local script configuration is [[#Perun integration|explained below]].
* '''oneacct''' scripts, which collect accounting data from OpenNebula and publish it into EGI's APEL instance. Oneacct configuration is explained on the [[Fedcloud-tf:WorkGroups:Scenario4#OpenNebula_Accounting_Scripts|FedCloud Accounting]] page.
* '''BDII''', which registers the site's configuration and description through the EGI Information System to facilitate service discovery. Configuration is [[#EGI_Information_System_2|explained below]].
Please consider that:
* '''CDMI''' storage endpoints are currently '''not supported''' for OpenNebula-based sites.
* OpenNebula ''Sunstone'' is '''not''' required!
The following '''ports''' must be open to allow access to an OpenNebula-based FedCloud site:
{| class="wikitable" style="margin: auto; margin-top: 30px; margin-bottom: 30px;"
|+ Open ports for OpenNebula and other components in FedCloud
! style="width: 90px;" | Port
! style="width: 110px;" | Application
! style="width: 430px;" | Host
! style="width: 250px;" | Note
|-
|'''22'''/TCP
|'''SSH'''
|OpenNebula '''Server''' Node
|<code>one</code> tools, Perun scripts
|-
|'''2170'''/TCP
|'''BDII'''/LDAP
|BDII Node (typically the OpenNebula '''Server''' Node)
|EGI Service Discovery
|-
|'''11443'''/TCP
|'''OCCI'''/HTTPs
|'''rOCCI-server''' node (typically the OpenNebula Server Node but can be located elsewhere)
|OCCI cloud resource management
|}
By nature, open ports cannot be specified for '''OpenNebula hosts''' (the nodes used to run virtual machines): their requirements for open ports depend on the virtual machines they run and cannot be known beforehand.
This is an overview of '''service accounts''' used in an OpenNebula-based FedCloud site. The names are default and can be changed if required.
{| class="wikitable" style="margin: auto; margin-top: 30px; margin-bottom: 30px;"
|+ Service accounts in OpenNebula sites in FedCloud
! style="width: 90px;" | Type
! style="width: 110px;" | Account name
! style="width: 180px;" | Host
! style="width: 500px;" | Use
|-
|rowspan="4"|System accounts
|<code>oneadmin</code>
|OpenNebula Server
|'''Default''' management account in OpenNebula. Also used by the '''Perun''' scripts, which access the account with SSH.
|-
|<code>rocci</code>
|rOCCI-server host (typically OpenNebula server)
|Apache application processes for the '''rOCCI-server'''. It is only a service account, no access required.
|-
|<code>apel</code>
|OpenNebula server
|Service account used to run '''APEL export''' scripts. Just a service account, no access required.
|-
|<code>openldap</code>
|OpenNebula server
|Service account used to run LDAP for '''BDII'''. Just a service account, no access required.
|-
|OpenNebula accounts
|<code>rocci</code>
|OpenNebula Server
|Used by the '''rOCCI-server''' to perform tasks through the OpenNebula API.
|}
Follow the [http://opennebula.org/documentation/ OpenNebula Documentation] and '''install OpenNebula with X.509 authentication support enabled'''.
The following OpenNebula versions are supported:
* OpenNebula v4.4.x (legacy)
* OpenNebula v4.6.x
* OpenNebula v4.8.x
* OpenNebula v4.10.x
* OpenNebula v4.12.x
* OpenNebula v4.14.x
Integration Prerequisites:
* Working OpenNebula installation with X.509 support enabled. Resource Centres are encouraged to follow the [http://docs.opennebula.org/4.12/administration/authentication/x509_auth.html step-by-step configuration guide provided by the OpenNebula developers]. There is no need to change the authentication driver for the oneadmin user or to create any user accounts manually at this time.
* Valid IGTF-trusted host certificates for selected hosts.
=== EGI Virtual Machine Management Interface -- OCCI ===
See [[rOCCI:ROCCI-server_Admin_Guide|rOCCI-server Installation Guide]].
=== EGI User Management/AAI ===
==== rOCCI-server + VOMS  ====
*Configure OpenNebula's X.509 authentication by modifying the <code>/etc/one/auth/x509_auth.conf</code> file:
<pre>
# Path to the trusted CA directory. It should contain the trusted CAs for
# the server; each CA certificate should be named CA_hash.0
:ca_dir: "/etc/grid-security/certificates"
</pre>
For more information, see the official [http://opennebula.org/documentation OpenNebula documentation].
*rOCCI-server
Example VHOST configuration file for Apache2 with only VOMS authentication enabled:
<pre>
<VirtualHost *:11443>
    # if you wish to change the default Ruby used to run this app
    PassengerRuby /opt/occi-server/embedded/bin/ruby
    # enable SSL
    SSLEngine on
    # for security reasons you may restrict the SSL protocol, but some clients may fail if SSLv2 is not supported
    SSLProtocol All -SSLv2 -SSLv3
    # this should point to your server host certificate
    SSLCertificateFile /etc/grid-security/hostcert.pem
    # this should point to your server host key
    SSLCertificateKeyFile /etc/grid-security/hostkey.pem
    # directory containing the Root CA certificates and their hashes
    SSLCACertificatePath /etc/grid-security/certificates
    # directory containing CRLs
    SSLCARevocationPath /etc/grid-security/certificates
    # set to optional, this tells Apache to attempt to verify SSL certificates if provided
    # for X.509 access with GridSite/VOMS, however, set to 'require'
    #SSLVerifyClient optional
    SSLVerifyClient require
    # if you have multiple CAs in the file above, you may need to increase the verify depth
    SSLVerifyDepth 10
    # enable passing of SSL variables to passenger. For GridSite/VOMS, enable also exporting certificate data
    SSLOptions +StdEnvVars +ExportCertData
    # configure OpenSSL inside rOCCI-server to validate peer certificates (for CMFs)
    #SetEnv SSL_CERT_FILE /path/to/ca_bundle.crt
    SetEnv SSL_CERT_DIR  /etc/grid-security/certificates
    # set RackEnv
    RackEnv production
    LogLevel info
    ServerName occi.host.example.org
    # important, this needs to point to the public folder of your rOCCI-server
    DocumentRoot /opt/occi-server/embedded/app/rOCCI-server/public
    <Directory /opt/occi-server/embedded/app/rOCCI-server/public>
        ## Export GridSite environment variables (needed for gridsite-admin.cgi to work)
        GridSiteEnvs on
        ## Nice GridSite directory listings (without truncating file names!)
        GridSiteIndexes off
        ## If this is greater than zero, we will accept GSI Proxies for clients
        ## (full client certificates - eg inside web browsers - are always ok)
        GridSiteGSIProxyLimit 4
        ## This directive allows authorized people to write/delete files
        ## from non-browser clients - eg with htcp(1)
        GridSiteMethods ""
        Allow from all
        Options -MultiViews
    </Directory>
    # configuration for Passenger
    PassengerUser rocci
    PassengerGroup rocci
    PassengerMinInstances 3
    PassengerFriendlyErrorPages off
    # configuration for rOCCI-server
    ## common
    SetEnv ROCCI_SERVER_LOG_DIR /var/log/occi-server
    SetEnv ROCCI_SERVER_ETC_DIR /etc/occi-server
    SetEnv ROCCI_SERVER_PROTOCOL              https
    SetEnv ROCCI_SERVER_HOSTNAME              occi.host.example.org
    SetEnv ROCCI_SERVER_PORT                  11443
    SetEnv ROCCI_SERVER_AUTHN_STRATEGIES      "voms"
    SetEnv ROCCI_SERVER_HOOKS                oneuser_autocreate
    SetEnv ROCCI_SERVER_BACKEND              opennebula
    SetEnv ROCCI_SERVER_LOG_LEVEL            info
    SetEnv ROCCI_SERVER_LOG_REQUESTS_IN_DEBUG no
    SetEnv ROCCI_SERVER_TMP                  /tmp/occi_server
    SetEnv ROCCI_SERVER_MEMCACHES            localhost:11211
    ## experimental
    SetEnv ROCCI_SERVER_ALLOW_EXPERIMENTAL_MIMES no
    ## authN configuration
    SetEnv ROCCI_SERVER_AUTHN_VOMS_ROBOT_SUBPROXY_IDENTITY_ENABLE  no
    ## hooks
    #SetEnv ROCCI_SERVER_USER_BLACKLIST_HOOK_USER_BLACKLIST          "/path/to/yml/file.yml"
    #SetEnv ROCCI_SERVER_USER_BLACKLIST_HOOK_FILTERED_STRATEGIES    "voms x509 basic"
    SetEnv ROCCI_SERVER_ONEUSER_AUTOCREATE_HOOK_VO_NAMES            "dteam ops"
    ## ONE backend
    SetEnv ROCCI_SERVER_ONE_XMLRPC  http://localhost:2633/RPC2
    SetEnv ROCCI_SERVER_ONE_USER    rocci
    SetEnv ROCCI_SERVER_ONE_PASSWD  yourincrediblylonganddifficulttoguesspassword
</VirtualHost>
</pre>
It is strongly recommended to set '''SSLVerifyClient require''' and '''SetEnv ROCCI_SERVER_AUTHN_STRATEGIES "voms"'''!
* Support for EGI VOs: [[HOWTO16 | VOMS configuration]]
* Create empty groups ''fedcloud.egi.eu'', ''ops'' and ''dteam'' in OpenNebula.
==== Perun integration ====
The current rOCCI-server implementation does not handle user management and identity propagation; integration with a third-party service is therefore necessary. The [https://perun.metacentrum.cz/perun-gui-cert/ Perun VO management server], developed and maintained by CESNET, provides user management capabilities for OpenNebula Resource Centres. It uses locally installed scripts (fully under the control of the Resource Centre in question) to propagate changes in the user pool to all registered Resource Centres. Resource Centres are required to install and, if need be, configure these scripts and report back to the EGI Cloud Federation for registration in Perun. Installation and configuration details are available in the [https://github.com/EGI-FCTF/fctf-perun EGI-FCTF/fctf-perun GitHub repository].
Remember that Perun requires '''SSH access''' to your machine, so that it can invoke the scripts and push user account changes to your site!
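For illustration only, the SSH access granted to Perun can be restricted to the propagation script with a forced command in the account's <code>authorized_keys</code> file (the host name, script path and key below are placeholders, not actual Perun values; consult the fctf-perun repository for the real script names):
<pre>
from="perun.example.org",command="/opt/perun-scripts/propagate.sh",no-pty,no-port-forwarding ssh-rsa AAAA... perun-propagation
</pre>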
==== Manual account management ====
If you want to use X.509/VOMS authentication for your users, you need to create the users in OpenNebula with the X.509 driver. For a user named 'johnsmith' from the <code>fedcloud.egi.eu</code> VO, the command may look like this:
$ oneuser create johnsmith "/DC=es/DC=irisgrid/O=cesga/CN=johnsmith/VO=fedcloud.egi.eu/Role=NULL/Capability=NULL" --driver x509
*Then set the user's X.509 properties:
$ oneuser update &lt;id_x509_user&gt;
X509_DN="/DC=es/DC=irisgrid/O=cesga/CN=johnsmith"
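The two commands above can be scripted for several users. The sketch below assumes a tab-separated file <code>users.tsv</code> with one user name and certificate DN per line; the file name and format are illustrative, only the <code>oneuser create</code> invocation follows the example above:

```shell
#!/bin/bash
# Batch-create X.509 users in OpenNebula for the fedcloud.egi.eu VO.
# users.tsv format (illustrative): <username>TAB<certificate DN>
VO="fedcloud.egi.eu"
while IFS=$'\t' read -r user dn; do
    # Login name as used by the rOCCI-server VOMS strategy: DN + VO attributes
    oneuser create "$user" "${dn}/VO=${VO}/Role=NULL/Capability=NULL" --driver x509
done < users.tsv
```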
=== EGI Accounting ===
See [[Fedcloud-tf:WorkGroups:Scenario4#OpenNebula_Accounting_Scripts|OpenNebula Accounting Scripts]].
=== EGI Information System ===
Sites must publish information to the EGI information system, which is based on BDII. There is a common [https://github.com/EGI-FCTF/cloud-bdii-provider BDII provider] for all cloud management frameworks. Information on installation and configuration is available in the cloud-bdii-provider [https://github.com/EGI-FCTF/cloud-bdii-provider/blob/master/README.md README.md] and in the [[Fedclouds BDII instructions]]; there is also a [[Fedclouds_BDII_instructions#OpenNebula_.2B_rOCCI|specific section with OpenNebula details]].
=== EGI Image Management ===
'''Important notice''': the current version of this integration component requires manual intervention from the site administrator when a new appliance/image is registered (NOT on subsequent updates). The site administrator must manually create a Virtual Machine Template and, in this template, reference the image in question by IMAGE and IMAGE_UNAME. This is a temporary workaround and will be removed in the next release of the vmcatcher integration component.
Sites in FedCloud offering VM management capability must give access to VO-endorsed VM images. This functionality is provided with vmcatcher (which is able to subscribe to the image lists available in AppDB) and a set of tools that push the subscribed images into the OpenNebula image datastore. In order to subscribe to VO-wide image lists, you need a valid access token for the AppDB. Check the [https://wiki.appdb.egi.eu/main:faq:how_to_get_access_to_vo-wide_image_lists how to get access to VO-wide image lists] and [https://wiki.appdb.egi.eu/main:faq:how_to_subscribe_to_a_private_image_list_using_the_vmcatcher how to subscribe to a private image list] documentation for more information.
Please refer to the [https://github.com/hepix-virtualisation/vmcatcher vmcatcher documentation] for installation.
[https://github.com/grid-admin/vmcatcher_eventHndlExpl_ON vmcatcher_eventHndlExpl_ON] is a vmcatcher event handler for OpenNebula that stores or disables images based on vmcatcher events. The following guide shows how to install and configure the vmcatcher handler as the oneadmin user, directly from GitHub. The configuration will automatically synchronize the OpenNebula image datastore with the registered vmcatcher images.
*Install pre-requisites for VMCatcher handler
[oneadmin@one-sandbox]$ sudo yum install -y qemu-img
*Install VMcatcher handler from github
[oneadmin@one-sandbox]$ mkdir $HOME/vmcatcher_eventHndlExpl_ON
[oneadmin@one-sandbox]$ cd $HOME/vmcatcher_eventHndlExpl_ON
[oneadmin@one-sandbox]$ wget http://github.com/grid-admin/vmcatcher_eventHndlExpl_ON/archive/v0.0.8.zip -O vmcatcher_eventHndlExpl_ON.zip
[oneadmin@one-sandbox]$ unzip vmcatcher_eventHndlExpl_ON.zip
[oneadmin@one-sandbox]$ mv vmcatcher_eventHndlExpl_ON*/* ./
[oneadmin@one-sandbox]$ rmdir vmcatcher_eventHndlExpl_ON-*
*Create the vmcatcher folders for ON (do not use /var/lib/one/ or other OpenNebula default directories for the vmcatcher cache, since images cannot be imported into OpenNebula from these directories; also, since this directory will host a copy of all the images downloaded via vmcatcher, it is suggested to place it on a separate disk)
[oneadmin@one-sandbox]$ sudo mkdir -p /opt/vmcatcher-ON/cache /opt/vmcatcher-ON/cache/partial /opt/vmcatcher-ON/cache/expired /opt/vmcatcher-ON/cache/templates
[oneadmin@one-sandbox]$ sudo chown oneadmin:oneadmin -R /opt/vmcatcher-ON
*Check that vmcatcher is running properly by listing and subscribing to an image list
[oneadmin@one-sandbox]$ export VMCATCHER_RDBMS="sqlite:////opt/vmcatcher-ON/vmcatcher.db"
[oneadmin@one-sandbox]$ vmcatcher_subscribe -l
[oneadmin@one-sandbox]$ vmcatcher_subscribe -e -s https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list
[oneadmin@one-sandbox]$ vmcatcher_subscribe -l
8ddbd4f6-fb95-4917-b105-c89b5df99dda    True    None    https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list
*Create a CRON wrapper for vmcatcher, named <code>/var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh</code>, using the following code
#!/bin/bash
#Cron handler for the vmcatcher image synchronization script for OpenNebula
#Vmcatcher configuration variables
export VMCATCHER_RDBMS="sqlite:////opt/vmcatcher-ON/vmcatcher.db"
export VMCATCHER_CACHE_DIR_CACHE="/opt/vmcatcher-ON/cache"
export VMCATCHER_CACHE_DIR_DOWNLOAD="/opt/vmcatcher-ON/cache/partial"
export VMCATCHER_CACHE_DIR_EXPIRE="/opt/vmcatcher-ON/cache/expired"
export VMCATCHER_CACHE_EVENT="python $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON"
#Update vmcatcher image lists
vmcatcher_subscribe -U
#Add all the new images to the cache
for a in `vmcatcher_image -l | awk '{if ($2==2) print $1}'`; do
  vmcatcher_image -a -u $a
done
#Update the cache
vmcatcher_cache -v -v
*Test that the vmcatcher handler is working correctly by running
[oneadmin@one-sandbox]$ chmod +x $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh
[oneadmin@one-sandbox]$ $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh
INFO:main:Defaulting actions as 'expire', and 'download'.
DEBUG:Events:event 'ProcessPrefix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'
DEBUG:Events:stdout=
DEBUG:Events:stderr=2014-07-16 12:25:49,586;  DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'ProcessPrefix'
2014-07-16 12:25:49,586; WARNING; vmcatcher_eventHndl_ON; main -- Ignoring event 'ProcessPrefix'
INFO:DownloadDir:Downloading '541b01a8-94bd-4545-83a8-6ea07209b440'.
DEBUG:Events:event 'AvailablePrefix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'
DEBUG:Events:stdout=
DEBUG:Events:stderr=2014-07-16 12:26:00,522;  DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'AvailablePrefix'
2014-07-16 12:26:00,522; WARNING; vmcatcher_eventHndl_ON; main -- Ignoring event 'AvailablePrefix'
INFO:CacheMan:moved file 541b01a8-94bd-4545-83a8-6ea07209b440
DEBUG:Events:event 'AvailablePostfix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'
DEBUG:Events:stdout=
DEBUG:Events:stderr=2014-07-16 12:26:00,567;  DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'AvailablePostfix'
2014-07-16 12:26:00,567;  DEBUG; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- Starting HandleAvailablePostfix for '541b01a8-94bd-4545-83a8-6ea07209b440'
2014-07-16 12:26:00,571;    INFO; vmcatcher_eventHndl_ON; UntarFile -- /opt/vmcatcher-ON/cache/541b01a8-94bd-4545-83a8-6ea07209b440 is an OVA file. Extracting files...
2014-07-16 12:26:00,599;    INFO; vmcatcher_eventHndl_ON; UntarFile -- Converting /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440/CoreLinux-disk1.vmdk to raw format.
2014-07-16 12:26:00,641;    INFO; vmcatcher_eventHndl_ON; UntarFile -- New RAW image created: /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440/CoreLinux-disk1.vmdk.raw
2014-07-16 12:26:00,642;    INFO; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- Creating template file /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440.one
2014-07-16 12:26:00,780;    INFO; vmcatcher_eventHndl_ON; getImageListXML -- Getting image list: oneimage list --xml
2014-07-16 12:26:00,784;    INFO; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- There is not a previous image with the same UUID in the OpenNebula infrastructure
2014-07-16 12:26:00,785;    INFO; vmcatcher_eventHndl_ON; HandleAvailablePostfix -- Instantiating template: oneimage create -d default /opt/vmcatcher-ON/cache/templates/541b01a8-94bd-4545-83a8-6ea07209b440.one | cut -d ':' -f 2
DEBUG:Events:event 'ProcessPostfix' executed 'python /var/lib/one/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON'
DEBUG:Events:stdout=
DEBUG:Events:stderr=2014-07-16 12:26:01,077;  DEBUG; vmcatcher_eventHndl_ON; main -- Processing event 'ProcessPostfix'
2014-07-16 12:26:01,077; WARNING; vmcatcher_eventHndl_ON; main -- Ignoring event 'ProcessPostfix'
*Add the following line to the oneadmin user crontab:
50 */6 * * * $HOME/vmcatcher_eventHndlExpl_ON/vmcatcher_eventHndl_ON_cron.sh &gt;&gt; /var/log/vmcatcher.log 2&gt;&amp;1
''NOTES:''
*vmcatcher_cache must be executed as oneadmin user.
*Environment variables can be used to set default values but the command line options will override any set environment options. Set these env variables for oneadmin user: VMCATCHER_RDBMS, VMCATCHER_CACHE_DIR_CACHE, VMCATCHER_CACHE_DIR_DOWNLOAD, VMCATCHER_CACHE_DIR_EXPIRE and VMCATCHER_CACHE_EVENT.
*vmcatcher_eventHndlExpl_ON generates ON image templates. These templates are available from $VMCATCHER_CACHE_DIR_CACHE/templates (template naming: $VMCATCHER_EVENT_DC_IDENTIFIER.one)
*The new ON images include ''VMCATCHER_EVENT_DC_IDENTIFIER = &lt;VMCATCHER_UUID&gt;'' tag. This tag is used to identify Fedcloud VM images.
*VMcatcher expired images are set as disabled by ON. It is up to the RC to remove disabled images or assign the new ones to a specific ON group or user.
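Since every image registered by the handler carries the ''VMCATCHER_EVENT_DC_IDENTIFIER'' tag, a quick sanity check of which ON images are vmcatcher-managed can be done with the standard CLI tools (a sketch using plain <code>grep</code>; an XML-aware tool would be more robust):

```shell
# Count images carrying the vmcatcher identifier tag
oneimage list --xml | grep -c 'VMCATCHER_EVENT_DC_IDENTIFIER'
# Show image names next to the identifier lines for a rough overview
oneimage list --xml | grep -E '<NAME>|VMCATCHER_EVENT_DC_IDENTIFIER'
```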
=== Registration of services in GOCDB ===
Site cloud services must be registered in [https://goc.egi.eu EGI Configuration Management Database (GOCDB)]. If you are creating a new site for your cloud services, check the [[PROC09|PROC09 Resource Centre Registration and Certification]] procedure. Services can also coexist within an existing (grid) site.
If offering OCCI interface, sites should register the following services:
* eu.egi.cloud.vm-management.occi for the OCCI endpoint offered by the site. Please note the special endpoint URL syntax described at [[Federated_Cloud_Technology#eu.egi.cloud.vm-management.occi|GOCDB usage in FedCloud]]
* eu.egi.cloud.accounting (host should be your OCCI machine)
* eu.egi.cloud.vm-metadata.vmcatcher (host should also be your OCCI machine)
* Site should also declare the following properties using the ''Site Extension Properties'' feature:
*# Max number of virtual cores for VM with parameter name: <code>cloud_max_cores4VM</code>
*# Max amount of RAM for VM with parameter name: <code>cloud_max_RAM4VM</code> using the format: value+unit, e.g. "16GB".
*# Max amount of storage that could be mounted in a VM with parameter name: <code>cloud_max_storage4VM</code> using the format: value+unit, e.g. "16GB".
Once the site services are registered in GOCDB and set as monitored, they will be checked by the [https://cloudmon.egi.eu/nagios Cloud SAM instance].
== Installation Validation  ==
You can check your installation following these steps:
*Check in [https://cloudmon.egi.eu/nagios Cloudmon] that your services are listed and are passing the tests. If all the tests are OK, your installation is already in good shape.
*Check that you are publishing cloud information in your site BDII:<br><code>ldapsearch -x -h &lt;site bdii host&gt; -p 2170 -b Glue2GroupID=cloud,Glue2DomainID=&lt;your site name&gt;,o=glue</code>
*Check that all the images listed on the [https://appdb.egi.eu/store/vo/fedcloud.egi.eu AppDB page for the fedcloud.egi.eu VO] are listed in your BDII. This sample query will return all the template IDs registered in your BDII:<br><code>ldapsearch -x -h &lt;site bdii host&gt; -p 2170 -b Glue2GroupID=cloud,Glue2DomainID=&lt;your site name&gt;,o=glue objectClass=GLUE2ApplicationEnvironment GLUE2ApplicationEnvironmentRepository</code>
*Try to start one of those images in your cloud. You can do it with <code>onetemplate instantiate</code> or OCCI commands; the result should be the same.
*Execute the [[HOWTO04_Site_Certification_Manual_tests#Check_the_functionality_of_the_cloud_elements|site certification manual tests]] against your endpoints.
*Check in the [http://accounting-devel.egi.eu/cloud.php accounting portal] that your site is listed and the values reported look consistent with the usage of your site.


[[Category:Operations_Manuals]]