
Federated Cloud OpenStack Appliance


Integration with EGI FedCloud Appliance

The EGI FedCloud Appliance packages a set of Docker containers to federate an OpenStack deployment with some EGI services:

  • Information System (BDII)
  • Accounting (cASO, SSM)
  • Image management (cloudkeeper)

You can get the current version of the appliance from its AppDB entry. It is available as an OVA file. You can extract the VMDK disk from the OVA by untarring the file; the disk can then be converted to other formats with qemu-img:

# get image and extract VMDK
curl https://cephrgw01.ifca.es:8080/swift/v1/egi_endorsed_vas/FedCloud-Appliance.Ubuntu.16.04-2017.08.09.ova | \
       tar xf - FedCloud-Appliance.Ubuntu.16.04-2017.08.09-disk001.vmdk
# convert to qcow2
qemu-img convert -O qcow2 FedCloud-Appliance.Ubuntu.16.04-2017.08.09-disk001.vmdk fedcloud-appliance.qcow2

Pre-requisites

The appliance works by querying the public APIs of an existing OpenStack installation. It assumes Keystone-VOMS is installed on that OpenStack deployment and that the voms.json file is properly configured.

The appliance uses the following OpenStack APIs:

  • nova, for getting the available images and flavors and for getting usage information
  • keystone, for authentication and for getting the available tenants
  • glance, for querying, uploading and removing VM images.

Not all services need to be accessed with the same credentials. Each component is configured individually, so you can use a different account for each of them if needed.
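
As a quick sanity check, you can verify that the credentials you plan to use can reach those APIs with the OpenStack client (assuming the client is installed on a machine with access to the endpoints; listing projects may require extra privileges depending on the account):

# list what the appliance will need to see (images, flavors, projects)
openstack image list
openstack flavor list
openstack project list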

Host certificate

A host certificate is needed to send the accounting information to the accounting repository. The DN of the host certificate must be registered in GOCDB under the service type eu.egi.cloud.accounting (see the registration section for more information).

The host certificate and key in PEM format are expected in /etc/grid-security/hostcert.pem and /etc/grid-security/hostkey.pem respectively.
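
You can check the DN that has to be registered in GOCDB with openssl:

openssl x509 -noout -subject -in /etc/grid-security/hostcert.pem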

Disk space

VM image replication requires a large amount of disk space (~100 GB) for storing the downloaded images. By default these are stored at /image_data. You can mount a volume at that location.
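
For example, to dedicate an extra volume to the image cache (the device name /dev/vdb below is only an illustration, adjust it to your setup):

# format the volume, mount it at /image_data and make the mount persistent
mkfs.ext4 /dev/vdb
mkdir -p /image_data
mount /dev/vdb /image_data
echo '/dev/vdb /image_data ext4 defaults 0 2' >> /etc/fstab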

Public IP / accessible name

The appliance must be reachable by the EGI Information System, which will check GOCDB for the exact location of your appliance (see the registration section below for more information).

EGI Accounting (cASO/SSM)

There are two different processes handling the accounting integration:

  • cASO, which connects to the OpenStack deployment to get the usage information, and,
  • ssmsend, which sends that usage information to the central EGI accounting repository.

They are run by cron every hour (cASO) and every six hours (ssmsend).

cASO configuration is stored at /etc/caso/caso.conf. Most default values are ok, but you must set:

  • site_name (line 12)
  • projects (line 20)
  • credentials to access the accounting data (lines 28-47, more options also available). Check the cASO documentation for the expected permissions of the user configured here.
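
As an orientation, a minimal sketch of the beginning of caso.conf could look as follows; all values are placeholders, and the exact option names and their location in the file may differ between cASO versions, so follow the comments in the shipped file and the cASO documentation:

[DEFAULT]
# site name as registered in GOCDB
site_name = MY-SITE
# comma-separated list of projects to extract accounting records for
projects = fedcloud.egi.eu,vo.access.egi.eu
# the credentials of the account allowed to read usage data are set
# in the authentication options further down in this file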

The cron job will use the voms mapping file at /etc/voms.json.

cASO will write records to /var/spool/apel where ssmsend will take them.

SSM configuration is available at /etc/apel. Defaults should be ok for most cases. The cron job uses /etc/grid-security for the CAs and for the host certificate and private key (/etc/grid-security/hostcert.pem and /etc/grid-security/hostkey.pem).

Running the services

Both caso and ssmsend are run via cron scripts, located at /etc/cron.d/caso and /etc/cron.d/ssmsend respectively. For convenience there are also two scripts, /usr/local/bin/caso-extract.sh and /usr/local/bin/ssm-send.sh, that run the docker containers with the proper volumes.
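
You can also run them by hand to verify the setup, e.g.:

# extract accounting records from OpenStack into /var/spool/apel
/usr/local/bin/caso-extract.sh
# push the generated records to the EGI accounting repository
/usr/local/bin/ssm-send.sh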

EGI Information System (BDII)

Information discovery provides a real-time view of the images and flavors available at the OpenStack deployment for federation users. It has two components:

  • Resource-level BDII: queries the OpenStack deployment to get the information to publish
  • Site-level BDII: gathers information from the resource-level BDIIs (in this case only one) and makes it publicly available to the EGI information system.

Resource-level BDII

This is provided by the egifedcloud/cloudbdii container. You need to configure:

  • /etc/cloud-info-provider/openstack.rc, with the credentials to query your OpenStack deployment. The configured user only needs to be able to list the available images and flavors.
  • /etc/cloud-info-provider/openstack.yaml, with the static information about your deployment. Make sure to set the SITE-NAME as defined in GOCDB.
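
openstack.rc is expected to hold the credentials as the usual OS_* environment variables. A minimal sketch, assuming Keystone v3 and using purely illustrative values, could be:

export OS_AUTH_URL=https://keystone.yourdomain.org:5000/v3
export OS_USERNAME=bdii-reader
export OS_PASSWORD=changeme
export OS_PROJECT_NAME=fedcloud.egi.eu
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default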

Site-level BDII

The egifedcloud/sitebdii container runs this process. Configuration files:

  • /etc/sitebdii/glite-info-site-defaults.conf. Set here the name of your site (as defined in GOCDB) and the public hostname where the appliance will be available.
  • /etc/sitebdii/site.cfg. Include here basic information on your site.
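
As a sketch, glite-info-site-defaults.conf usually reduces to a couple of variables; the variable names below are the ones commonly used by site-BDII configurations and are an assumption here, so keep whatever variables are already present in the shipped file:

SITE_NAME=MY-SITE
SITE_BDII_HOST=appliance.yourdomain.org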

Running the services

There is a bdii.service systemd unit available in the appliance, which uses docker-compose to run the containers. You can start the service with:

systemctl start bdii

Check the status with:

systemctl status bdii

And stop with:

systemctl stop bdii

You should be able to get the BDII information with an LDAP client, e.g.:

ldapsearch -x -p 2170 -h <yourVM.hostname.domain.com> -b o=glue

EGI Image Management (cloudkeeper)

The appliance provides VMI replication with cloudkeeper. Every 4 hours, the appliance will perform the following actions:

  • download the image lists configured in /etc/cloudkeeper/image-lists.conf and verify their signatures
  • check the lists for changes and download any new images
  • synchronise this information to the configured glance endpoint

cloudkeeper has two components:

  • frontend, dealing with the image lists and downloading the needed images
  • backend, dealing with your glance catalogue

First you need to configure and start the backend. Edit /etc/cloudkeeper/cloudkeeper-os.conf and add the authentication parameters on lines 117 to 136.

Then add to /etc/cloudkeeper/image-lists.conf as many image lists (one per line) as you would like to subscribe to. Use URLs that include your AppDB token for authentication.
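
An entry in image-lists.conf is just the subscription URL of a VO image list. With an AppDB token it typically has the following shape (the token and VO name below are placeholders):

https://<your-appdb-token>:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/fedcloud.egi.eu/image.list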

Running the services

cloudkeeper-os should run permanently; the appliance provides a cloudkeeper-os.service unit for systemd. Manage it as usual:

systemctl <start|stop|status> cloudkeeper-os

cloudkeeper core is run every 4 hours with a cron script.

Upgrading the appliance

From 20160403 to 2017.08.09

There are several major changes between those versions, namely:

  • atrope has been deprecated and cloudkeeper is used instead. The configuration cannot be reused directly, and the new services need to be configured as described above.
  • caso is upgraded to version 1.1.1; the configuration file has some incompatible changes.
  • A new bdii.service is available for managing the BDII processes.