The wiki is deprecated and due to be decommissioned by the end of September 2022. The content is being migrated to other platforms; new updates will be ignored and lost. If needed, you can get in touch with the EGI SDIS team using operations @

Federated Cloud IaaS Orchestration

Revision as of 16:32, 16 March 2017

IaaS Orchestration automates the deployment of resources on cloud services. Orchestrators normally use some sort of domain-specific language or script that defines your application deployment process; this is translated into a set of tasks that interact with the cloud services to start virtual machines, storage, networks and other kinds of resources and services where your application will be installed and run. Several orchestration tools are able to manage resources of the EGI Federated Cloud, each with a different design approach that may suit your application's needs. The following table summarises the currently supported tools:

{| class="wikitable"
|-
! Tool !! Supported EGI Cloud Interfaces !! Infrastructure description !! Deployment !! Web GUI !! CLI
|-
| IM || OCCI || RADL || Server || Yes || Yes
|-
| Terraform || OpenStack, OCCI (in progress) || Terraform configurations || Client-side tool || No || Yes
|-
| Occopus || OCCI, OpenStack || Occopus infrastructure description || Client or server || No || Yes
|-
| SlipStream || OCCI || SlipStream Applications || Server || Yes || Yes (application specification is performed on the web GUI)
|}


IM

UPV offers a public instance of IM where you can register a new account. Documentation is available at [1].

Once you have an account, you can interact with IM either via the web GUI or using the command-line tool. The command-line client can easily be installed with pip inside a virtualenv (virtualenvwrapper is recommended):

# create a "im" virtualenv
$ mkvirtualenv im
# now we are already in the im virtualenv, install im
(im)$ pip install im_client

Whenever you want to use the client tool, just enter the virtualenv and it will be available on your path:

$ workon im
(im) $ which im_client.py
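If virtualenvwrapper is not installed, the standard library venv module gives equivalent isolation; a minimal sketch (the /tmp/im path is just an example location):

```shell
# create a plain virtualenv with the stdlib venv module instead of mkvirtualenv
python3 -m venv /tmp/im
# activate it; python and pip now resolve inside the venv
. /tmp/im/bin/activate
# sys.prefix points at the venv directory while it is active
python -c 'import sys; print(sys.prefix)'
```

Running pip install im_client inside the activated environment then works exactly as with virtualenvwrapper.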


IM uses a file with the credentials used to access the IM server and the providers. See below an example with two OCCI providers:

$ cat ~/.im_client_auth
id = im; type = InfrastructureManager; username = <your_im_user>; password = <your_im_password>
id = occi_bari; type = OCCI; proxy = file(/tmp/x509up_u1000); host =
id = occi_cesnet; type = OCCI; proxy = file(/tmp/x509up_u1000); host =
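The auth file is line-oriented: each line is a semicolon-separated list of key = value pairs and must carry at least an id and a type. A quick sanity check of the format (the content below uses placeholder values, not real credentials or endpoints):

```shell
# write a sample auth file with placeholder values
# (occi.example.org is a hypothetical endpoint, not a real site)
cat > /tmp/im_client_auth.sample <<'EOF'
id = im; type = InfrastructureManager; username = myuser; password = mypass
id = occi_site; type = OCCI; proxy = file(/tmp/x509up_u1000); host = https://occi.example.org:11443
EOF
# every line needs both an id and a type; count lines that have both
grep -c 'id = .*type = ' /tmp/im_client_auth.sample
# prints 2
```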

You can get the URLs for the OCCI endpoints at AppDB or GOCDB; check the [[Federated_Cloud_APIs_and_SDKs#Discovery_of_resources|Discovery of Resources]] page for more information.

Commands issued need to have the URL of the IM server and the authentication file as parameters, like this:

(im) $ im_client.py -u <im_server_url> -a ~/.im_client_auth <command> <command options>

For example, listing your deployed infrastructures:

(im) $ im_client.py -u <im_server_url> -a ~/.im_client_auth list
Connected with:
Infrastructure IDs:


IM's native language for specifying deployments is called RADL (Resource and Application Description Language). It has sections to specify the VMs to be deployed and the configuration to be applied on them, with tight integration with Ansible. The following example creates a VM on the RECAS-BARI resource provider (notice the disk.0.image.url, which contains the URL of the OCCI endpoint followed by the VM image id) of type 7 (these ids can be obtained via AppDB or BDII), using the grycap.swarm module from Ansible Galaxy.

network public (outbound = 'yes')

system master (
instance_type = '7' and
net_interface.0.connection = 'public' and
net_interface.0.dns_name = 'master' and
disk.0.os.name = 'linux' and
disk.0.image.url = [''] and
disk.0.os.credentials.username = 'cloudadm' and
disk.0.applications contains (name='ansible.modules.grycap.swarm')
)

configure master (
@begin
 - roles:
    - { role: 'grycap.swarm' }
@end
)

deploy master 1

For more information, check the RADL documentation.


Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing popular service providers as well as custom in-house solutions. Terraform is installed as a single binary and can be extended with plugins for additional functionality. We have created a plugin for Terraform that allows interacting with OpenStack resources using EGI AAI based on X.509 proxies.


The EGI OpenStack plugin for Terraform provides the same functionality as the native built-in OpenStack provider but adds the capability of authenticating with X.509 VOMS proxies. In order to use it, you will need to:

  1. Download the binary for your Terraform version. Let us know if you need a specific version or new platforms to be provided.
  2. Override the built-in OpenStack driver of Terraform in your ~/.terraformrc file by including these lines:
    providers {
      openstack = "terraform-provider-egi-openstack"
    }
  3. Add in your Terraform files the configuration for the OpenStack resources with the new voms attribute and your proxy in the cert and key options:
    provider "openstack" {
      auth_url = ""
      tenant_id = "8f5c5d426a164478b1c659965a0a3dfd"
      cert = "/tmp/x509up_u1000"
      key = "/tmp/x509up_u1000"
      voms = true
    }

You can get the tenant_id for a specific resource provider as described in the OpenStack CLI guide.

Terraform configuration

Below is an example of a Terraform configuration that creates a single VM with a public IP at the BIFI resource provider:

provider "openstack" {
    auth_url = ""
    tenant_id = "8f5c5d426a164448b1c65a965aea3dfd"
    cert = "/tmp/x509up_u1000"
    key = "/tmp/x509up_u1000"
    cacert_file = "/home/ubuntu/.virtualenvs/voms/lib/python2.7/site-packages/requests/cacert.pem"
    voms = true
}

resource "openstack_compute_keypair_v2" "mykey" {
  name = "mykey"
  public_key = "${file("~/.ssh/")}"
}

resource "openstack_compute_floatingip_v2" "floatip_1" {
  pool = "provider"
}

resource "openstack_compute_instance_v2" "master" {
  name = "master"
  image_id = "befecd08-78c2-4177-bbf5-4afd462f5d09"
  flavor_id = "308bc2b2-1e1e-4af9-a98f-cac76b6ce084"
  key_pair = "${openstack_compute_keypair_v2.mykey.name}"
  security_groups = ["default"]

  metadata = {
    role = "master"
  }

  network {
    uuid = "1c2a07d8-3d9e-4863-90a6-341893a72f0a"
    floating_ip = "${openstack_compute_floatingip_v2.floatip_1.address}"
    access_network = true
  }
}
Ids can be obtained either via AppDB or by interacting with the native OpenStack API at the site. Complete configuration options are available in the OpenStack provider documentation.
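To recover the VM's public address after terraform apply, an output block can be added to the same configuration. This is a sketch, and it assumes the floatip_1 resource name from the example above:

```terraform
# print the floating IP assigned to the VM after "terraform apply"
output "master_ip" {
  value = "${openstack_compute_floatingip_v2.floatip_1.address}"
}
```

Running terraform output master_ip afterwards prints the address without re-running the plan.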


An OCCI plugin for Terraform is in development. It is still work in progress and is available as source at this GitHub repo; check the README file for more information.


Occopus

The Occopus installation guide details the steps needed to get Occopus installed on your machine. Any modern Linux distribution with Python support should work.


Authentication in Occopus is configured in the ~/.occopus/auth_data.yaml file. Below is a working example for OpenStack (nova) and OCCI resource providers using your X.509 VOMS proxy:

resource:
    -
        type: nova
        auth_data:
            type: voms
            proxy: /tmp/x509up_u1000
    -
        type: occi
        auth_data:
            proxy: /tmp/x509up_u1000

Infrastructure example

Occopus provides several examples in its documentation; the Tutorials on resource plugins will guide you through all the steps required to run an example.


SlipStream

SlipStream requires a server instance to be available for its usage. Currently there is no such instance available; get in touch with us if you need information on using SlipStream.