The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @

Federated Cloud IaaS Orchestration

Revision as of 10:49, 20 June 2018


IaaS Provisioning Tools

IaaS provisioning tools automate the deployment of resources on cloud services. These tools normally use some sort of domain-specific language or script that defines your application deployment process, which is translated into a set of tasks that interact with the cloud services to start virtual machines, storage, networks and other kinds of resources and services where your application will be installed and run. Several tools are able to manage resources of the EGI Federated Cloud, each with a different design approach that may suit your application's needs. The following table summarises the currently supported tools:

Tool | Supported EGI Cloud Interfaces | Other Cloud interfaces supported | Infrastructure description | Deployment | Web GUI | CLI
IM | OpenStack, OpenNebula, OCCI | AWS EC2, GCP, Azure, Docker, Kubernetes, FogBow, T-Systems OTC, libvirt | RADL | Server | Yes | Yes
Terraform | OpenStack, OCCI | Check Terraform providers; also allows plugins | Terraform configurations | Client-side tool | No | Yes
OCCOPUS | OCCI, OpenStack | AWS EC2, CloudBroker, Docker, CloudSigma | Occopus infrastructure description | Client or server | No | Yes


IM

UPV offers a public instance of IM where you can register a new account. Documentation is available at [1].

Once you have an account, you can interact with IM either via the web GUI or with the command-line tool. The command-line client can be easily installed with pip inside a virtualenv (virtualenvwrapper is recommended for the installation):

# create a "im" virtualenv
$ mkvirtualenv im
# now we are already in the im virtualenv, install im
(im)$ pip install im_client

Whenever you want to use the client tool, just enter the virtualenv and it will be available on your path:

$ workon im
(im) $ which


IM uses a file with the credentials used to access the IM server and the providers. See below an example with two OCCI providers:

$ cat ~/.im_client_auth
id = im; type = InfrastructureManager; username = <your_im_user>; password = <your_im_password>
id = occi_bari; type = OCCI; proxy = file(/tmp/x509up_u1000); host =
id = occi_cesnet; type = OCCI; proxy = file(/tmp/x509up_u1000); host =
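As a sketch, the file can be created directly from the shell; the credentials below are illustrative placeholders, not real values, and since the file holds a password it should be readable only by you:

```shell
# Create the IM client auth file (illustrative placeholder values;
# substitute your own IM credentials and OCCI endpoint).
cat > "$HOME/.im_client_auth" <<'EOF'
id = im; type = InfrastructureManager; username = myuser; password = mypass
id = occi_bari; type = OCCI; proxy = file(/tmp/x509up_u1000); host = <occi_endpoint_url>
EOF
# The file contains a password: restrict it to the owner.
chmod 600 "$HOME/.im_client_auth"
```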

You can get the URLs for the OCCI endpoints from AppDB or GOCDB; check the Discovery of Resources page (Federated_Cloud_APIs_and_SDKs#Discovery_of_resources) for more information.
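For instance, GOCDB exposes a public programmatic interface that can be queried from the command line; the sketch below assumes the GOCDB PI `get_service_endpoint` method and the `eu.egi.cloud.vm-management.occi` service type, and requires network access:

```shell
# List registered OCCI compute endpoints from the GOCDB public API
# (sketch; method and service-type names are assumptions here).
curl 'https://goc.egi.eu/gocdbpi/public/?method=get_service_endpoint&service_type=eu.egi.cloud.vm-management.occi'
```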

Commands issued need to include the URL of the server and the authentication file as parameters, like this:

(im) $ -u -a ~/.im_client_auth <command> <command options>

For example, to list your deployed infrastructures:

(im) $ -u -a ~/.im_client_auth list
Connected with:
Infrastructure IDs:


IM's native language for specifying deployments is called RADL (Resource and Application Description Language). It has sections to specify the VMs to be deployed and the configuration to be applied to them, with tight integration with Ansible. The following example creates a VM of type 7 on the RECAS-BARI resource provider (notice disk.0.image.url, which contains the URL of the OCCI endpoint followed by the VM image id; these ids can be obtained via AppDB or the BDII), using the grycap.swarm role from Ansible Galaxy.

network public (outbound = 'yes')

system master (
instance_type = '7' and
net_interface.0.connection = 'public' and
net_interface.0.dns_name = 'master' and
disk.0.os.name = 'linux' and
disk.0.image.url = '' and
disk.0.os.credentials.username = 'cloudadm' and
disk.0.applications contains (name='ansible.modules.grycap.swarm')
)

configure master (
@begin
 - roles:
    - { role: 'grycap.swarm' }
@end
)

deploy master 1

For more information, check the RADL documentation.
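Putting it together, a deployment with the client might look like the sketch below; the server URL and infrastructure id are placeholders, as in the earlier examples, and swarm.radl stands for a file holding the RADL above:

```shell
# Save the RADL above as swarm.radl, then deploy it.
im_client.py -u <im_server_url> -a ~/.im_client_auth create swarm.radl
# The create call prints an infrastructure id; use it to check progress
im_client.py -u <im_server_url> -a ~/.im_client_auth getstate <inf_id>
# and to tear everything down when finished.
im_client.py -u <im_server_url> -a ~/.im_client_auth destroy <inf_id>
```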


Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing and popular service providers as well as custom in-house solutions. Terraform is installed as a single binary and can be extended with plugins for additional functionality. We have created a plugin for Terraform that allows interacting with OpenStack resources using EGI AAI based on X.509 proxies.


The EGI OpenStack plugin for Terraform provides the same functionality as the native built-in OpenStack provider but adds the capability of authenticating with X.509 VOMS proxies. In order to use it, you will need to:

  1. Download the binary for your Terraform version. Let us know if you need a specific version or new platforms to be provided.
  2. Override the builtin OpenStack driver of Terraform in your ~/.terraformrc file by including these lines:
    providers {
        openstack = "terraform-provider-egi-openstack"
    }
  3. Add to your Terraform files the configuration for the OpenStack resources with the new voms attribute and your proxy in the cert and key options:
    provider "openstack" {
        auth_url = ""
        tenant_id = "8f5c5d426a164478b1c659965a0a3dfd"
        cert = "/tmp/x509up_u1000"
        key = "/tmp/x509up_u1000"
        voms = true
    }
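The override in step 2 can be applied with a one-off shell snippet; this assumes the plugin binary downloaded in step 1 is available under the name shown:

```shell
# Write ~/.terraformrc so Terraform uses the EGI OpenStack plugin
# instead of the builtin provider.
cat > "$HOME/.terraformrc" <<'EOF'
providers {
    openstack = "terraform-provider-egi-openstack"
}
EOF
```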

You can get the tenant_id for a specific resource provider as described in the OpenStack CLI guide.

Terraform configuration

Below is an example of a Terraform configuration that creates a single VM with a public IP at the BIFI resource provider:

provider "openstack" {
    auth_url = ""
    tenant_id = "8f5c5d426a164448b1c65a965aea3dfd"
    cert = "/tmp/x509up_u1000"
    key = "/tmp/x509up_u1000"
    cacert_file = "/home/ubuntu/.virtualenvs/voms/lib/python2.7/site-packages/requests/cacert.pem"
    voms = true
}

resource "openstack_compute_keypair_v2" "mykey" {
  name = "mykey"
  public_key = "${file("~/.ssh/")}"
}

resource "openstack_compute_floatingip_v2" "floatip_1" {
  pool = "provider"
}

resource "openstack_compute_instance_v2" "master" {
  name = "master"
  image_id = "befecd08-78c2-4177-bbf5-4afd462f5d09"
  flavor_id = "308bc2b2-1e1e-4af9-a98f-cac76b6ce084"
  key_pair = "${openstack_compute_keypair_v2.mykey.name}"
  security_groups = ["default"]

  metadata = {
    role = "master"
  }

  network {
    uuid = "1c2a07d8-3d9e-4863-90a6-341893a72f0a"
    floating_ip = "${openstack_compute_floatingip_v2.floatip_1.address}"
    access_network = true
  }
}
Ids can be obtained either via AppDB or by interacting with the native OpenStack API against the site. Complete configuration options are available in the OpenStack provider documentation.
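With the provider override in place and a valid VOMS proxy, the standard Terraform workflow applies; a sketch, to be run in the directory containing the configuration above:

```shell
terraform init      # initialise the working directory and providers
terraform plan      # preview the keypair, floating IP and VM to be created
terraform apply     # create the resources at the site
terraform destroy   # tear everything down when finished
```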


OCCI plugin for Terraform

There is an OCCI plugin for Terraform in development. This is still work in progress and is available as source at this github repo. Check the README file for more information.


Occopus

The Occopus installation guide details the steps needed to get Occopus installed on your machine. Any modern Linux with Python support should work.


Authentication in Occopus is configured in the ~/.occopus/auth_data.yaml file. Below is a working example for OpenStack (nova) and OCCI resource providers using your X.509 VOMS proxy:

resource:
    -
        type: nova
        auth_data:
            type: voms
            proxy: /tmp/x509up_u1000
    -
        type: occi
        auth_data:
            proxy: /tmp/x509up_u1000

Infrastructure example

Occopus provides several examples in its documentation. The Tutorials on resource plugins will guide you through all the steps required to run an example.
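Once the authentication file is in place, running one of those examples boils down to the Occopus command-line tools; a sketch, where infra.yaml stands for the infrastructure description file of the chosen tutorial:

```shell
# Build the infrastructure; occopus-build prints an infrastructure id.
occopus-build infra.yaml
# Destroy it when finished, passing the id returned by the build step.
occopus-destroy -i <infrastructure_id>
```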


SlipStream

SlipStream requires a server instance to be available for its usage. Currently there is no such instance available; get in touch with us at if you need information on using SlipStream.

Application Brokers

An application broker abstracts the cloud infrastructure and frees you from the need to control the deployment of the application. Once you have integrated your application into the broker, it will take care of starting the virtual servers according to the workload, configuring them and dispatching the user jobs. The effort to integrate your application into the application broker depends on the application and the broker itself. The most common process, however, is to use basic OS images with contextualisation, or a custom OS image, to which you add the application broker's worker-node routines.

When your application performs parallel processing, using an application broker may speed up the application's execution, since the broker will take care of submitting the processing to different servers, using a larger number of resources.

Each application broker is usually suited to a specific use case: they may offer a sort of PaaS or SaaS environment for the applications (e.g. Grid clusters, Hadoop clusters, etc.) or integrate applications via wrappers written in Java or other programming languages.

The following table shows the application broker solutions currently supported by the EGI Federated Cloud. For information about these solutions, please visit the web sites linked in the table.

Name Supported EGI Cloud Interfaces Main Features Step by step guide
Catania Science Gateway Framework OCCI (via JSAGA rOCCI adaptor) The CSGF allows users to execute applications on the EGI Federated Cloud through web portals/SGs. The Science Gateways based on CSGF provide users with an intuitive web interface to execute applications on the Cloud as jobs and to manage these jobs while they run (check the status and download the output). The SGs take care of starting the VMs on the EGI Federated Cloud, transferring the needed files (e.g. executable, input files, etc.), stopping the VMs and downloading the output on behalf of the user.

If you need any assistance or user support, contact the CSGF team at

CSGF as Application Broker How To
COMPSs OCCI Automatic parallelization and orchestration of applications and services, elasticity, auto scaling COMPSs How To
VMDIRAC OCCI Accounting, monitoring, brokering, scheduling, automatic contextualization with cloudinit VMDIRAC as Application Broker How To
WSPGRADE OCCI WS-PGRADE/gUSE is a job submission tool for clouds. If you have a job to run in the cloud, you specify only the executable, its input parameters and the name of the result files via a simple user interface, as well as the cloud of the EGI FedCloud where you want to run your job; your job will then run in the selected cloud. The result file will be available for download from WS-PGRADE/gUSE. Learning to use this feature takes about 10 minutes. (See the demo at the EGI CF in Helsinki.)

WS-PGRADE/gUSE is also a gateway to clouds that helps you port your grid application to the cloud with minimal effort. If you have a workflow application that used to work on grid resources, you can easily reconfigure the nodes of the workflow to run in the EGI FedCloud. See Chapter 18, "Job Submission to Cloud by Direct Cloud Access Solution", in the WS-PGRADE/gUSE user manual at sourceforge. Learning this feature of WS-PGRADE/gUSE takes about 1 hour.

Finally, with WS-PGRADE/gUSE you can develop new workflows that can be executed in the EGI FedCloud or in a mixed grid/cloud infrastructure. See the WS-PGRADE/gUSE user manual at sourceforge, in the URL mentioned above. Learning this feature of WS-PGRADE/gUSE takes about 1 day.

If you need any assistance or user support, contact the WS-PGRADE/gUSE team at