Difference between revisions of "HOWTO15 How to configure the Federated Cloud BDII"

From EGIWiki
 
= Purpose =

This page provides instructions on how to configure the Federated Cloud Production BDII.

= Installation guide =

== Pre-requisites ==

This guide has the following pre-requisites:

* A site BDII with support for the GLUE2 schema specification. If you do not already have a site BDII installed from EMI or UMD, you can follow the installation guide below.
* Python 2.2.x

== Install Site-BDII ==

If you already have a production BDII (e.g. for existing Grid or storage resources), you can skip this step. Otherwise, here is a quick guide on how to install and configure a site BDII:

# Install the UMD repository according to the instructions here: http://repository.egi.eu/category/umd_releases/distribution/umd-3/
# Install the Site-BDII packages: <code>yum install bdii bdii-config-site</code>
# Edit the file <code>/etc/glite-info-static/site/site.cfg</code> with your site information
# Start the BDII service: <code>service bdii start; chkconfig bdii on</code>
# In your GOCDB site information, configure the 'GIIS URL' with the address of your site BDII and the base schema (eg: <code>ldap://prisma-cloud.ba.infn.it:2170/GLUE2DomainID=PRISMA-INFN-BARI,o=glue</code>)
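The 'GIIS URL' is simply the LDAP URL of your site BDII followed by the GLUE2 base DN. A minimal sketch of how it is assembled, using the example host and domain from this page (substitute your own values):

```shell
# Example values from this page -- replace with your site BDII host and
# GLUE2 domain ID (normally your GOCDB site name).
BDII_HOST=prisma-cloud.ba.infn.it
GLUE2_DOMAIN=PRISMA-INFN-BARI

GIIS_URL="ldap://${BDII_HOST}:2170/GLUE2DomainID=${GLUE2_DOMAIN},o=glue"
echo "${GIIS_URL}"
# → ldap://prisma-cloud.ba.infn.it:2170/GLUE2DomainID=PRISMA-INFN-BARI,o=glue
```

You can then check from another host that the DN answers, e.g. <code>ldapsearch -x -H ldap://prisma-cloud.ba.infn.it:2170 -b 'GLUE2DomainID=PRISMA-INFN-BARI,o=glue'</code>.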
== Install the cloud resource provider script ==

To fill the BDII with the cloud resource information, you need to install the cloud resource provider script.

=== For RHEL/CentOS/ScientificLinux 6.x ===

# Install EPEL (follow the instructions [https://fedoraproject.org/wiki/EPEL here])
# Install the cloud provider script via RPM:
 yum localinstall http://github.com/EGI-FCTF/BDIIscripts/raw/master/rpm/cloud-info-provider-service-0.2-1.el6.noarch.rpm

=== For other OSes (install from sources) ===

 git clone http://github.com/EGI-FCTF/BDIIscripts/
 cd BDIIscripts
 pip install -e .
  
 
= Configuration guide =

== Configure middleware backend ==

The cloud provider script retrieves its information partly from a static configuration file and partly from the cloud middleware directly. Thus, the configuration depends on which middleware you have installed.

=== OpenNebula ===

''NOTE:'' This section covers a pure OpenNebula installation. If you have installed OpenNebula via rOCCI, refer to the [[#OpenNebula via rOCCI|OpenNebula via rOCCI]] guide.
  
* Copy the sample provider configuration file to the default software configuration file:
 cp /opt/cloud-info-provider/etc/sample.opennebula.yaml /opt/cloud-info-provider/etc/bdii.yaml
* Edit the <code>/opt/cloud-info-provider/etc/bdii.yaml</code> configuration, setting the permanent site information and the OpenNebula connection information. Most of the information to be provided is self-explanatory or specified in the comments. Below is a set of notes that can be relevant during the configuration.
''Configuration notes:''

* Always keep full_bdii_ldif set to False
* You need to specify the connection parameters for the OpenNebula XML-RPC interface. ''on_auth'' should contain the authorization parameters for an existing user with full read permissions on the image disks. If the user has been created with the ''core'' driver, this parameter shall be set to ''<username>:<password>''. ''on_rpcxml_endpoint'' shall contain the address of the RPCv2 endpoint, usually ''http://myipaddress:2633/RPC2''. If not on a secure network, it is suggested to serve this interface via HTTPS, since the on_auth parameter will be sent in clear text to the server.
* ''site'' parameters can be left commented, since they will be automatically retrieved from the ''/etc/glite-info-static/site/site.cfg'' configuration file
* Compute templates can be ignored, since OpenNebula has no concept of resource flavours
* Object storage services (STorage-as-a-Service) can be set statically. As they are not provided by OpenNebula, they can be ignored or set to the ones provided by other middleware.
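As an illustration only, the OpenNebula-specific part of <code>bdii.yaml</code> might look like the fragment below. The exact layout is defined by the comments in the shipped sample file; the credentials are placeholders:

```yaml
# Illustrative fragment -- follow the structure of
# /opt/cloud-info-provider/etc/sample.opennebula.yaml.
full_bdii_ldif: False                                # always False, as noted above
on_auth: 'oneadmin:changeme'                         # <username>:<password> of a core-driver user
on_rpcxml_endpoint: 'http://myipaddress:2633/RPC2'   # prefer an HTTPS endpoint on open networks
# 'site' keys can stay commented: values are read from
# /etc/glite-info-static/site/site.cfg
```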
  
=== OpenNebula via rOCCI ===

* Copy the sample provider configuration file to the default software configuration file:
 cp /opt/cloud-info-provider/etc/sample.opennebularocci.yaml /opt/cloud-info-provider/etc/bdii.yaml
* Edit the ''/opt/cloud-info-provider/etc/bdii.yaml'' configuration, setting the permanent site information and the OpenNebula connection information. Most of the information to be provided is self-explanatory or specified in the comments. Below is a set of notes that can be relevant during the configuration.
''Configuration notes:''

* Always keep full_bdii_ldif set to False
* You need to specify the connection parameters for the OpenNebula XML-RPC interface. ''on_auth'' should contain the authorization parameters for an existing user with full read permissions on the image disks. If the user has been created with the ''core'' driver, this parameter shall be set to ''<username>:<password>''. ''on_rpcxml_endpoint'' shall contain the address of the RPCv2 endpoint, usually ''http://myipaddress:2633/RPC2''. If not on a secure network, it is suggested to serve this interface via HTTPS, since the on_auth parameter will be sent in clear text to the server.
* ''site'' parameters can be left commented, since they will be automatically retrieved from the ''/etc/glite-info-static/site/site.cfg'' configuration file
* Compute templates can be gathered in two ways: directly from the rOCCI configuration, by setting the ''template_dir'' parameter to the rOCCI configuration folder, or manually, by placing them in the configuration file. One option does not preclude the other; the resulting templates will be the merge of the two.
* Images are retrieved from the OpenNebula templates, to mimic the behavior of rOCCI.
* Object storage services (STorage-as-a-Service) can be set statically. As they are not provided by OpenNebula, they can be ignored or set to the ones provided by other middleware.
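Relative to the plain OpenNebula case, the extra knob mentioned above is ''template_dir''. An illustrative fragment (the path is a placeholder for your rOCCI configuration folder):

```yaml
# Illustrative fragment -- see the comments in
# /opt/cloud-info-provider/etc/sample.opennebularocci.yaml for the real layout.
full_bdii_ldif: False
template_dir: '/path/to/rocci/templates'   # compute templates gathered from here,
                                           # merged with any listed in this file
```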
  
=== OpenStack ===

* Install the Nova Python SDK (needed by the OpenStack driver). You should have packages for it; in RHEL they are provided by EPEL, and you can install them via:
 yum install -y python-novaclient
* Copy the sample provider configuration file to the default software configuration file:
 cp /opt/cloud-info-provider/etc/sample.openstack.yaml /opt/cloud-info-provider/etc/bdii.yaml
* Edit the <code>/opt/cloud-info-provider/etc/bdii.yaml</code> configuration, setting the permanent site information and the OpenStack connection information. Most of the information to be provided is self-explanatory or specified in the file comments. Below is a set of notes that can be relevant during the configuration.

''Configuration notes:''
* Always keep full_bdii_ldif set to False
* You need to specify the connection parameters for the OpenStack authentication service (Keystone). OpenStack will then get the API endpoints from Keystone.
* Be sure that Keystone contains the OCCI endpoint, otherwise it will not be published by the BDII. You can check this via the command <code>keystone service-list</code>. To create a new service and endpoint, you can run <code>keystone service-create --name nova --type occi --description 'Nova OCCI Service'</code> and then <code>keystone endpoint-create --service_id 8e6de5d0d7624584bed6bec9bef7c9e0 --region RegionOne --publicurl http://$HOSTNAME:8787/ --internalurl http://$HOSTNAME:8787/ --adminurl http://$HOSTNAME:8787/</code>, where the ''service_id'' is the one obtained from <code>keystone service-list</code>
* In production environments, it is recommended to set the ''insecure'' parameter in the options to ''False'' and to uncomment ''os_cacert''
* ''site'' parameters can be left commented, since they will be automatically retrieved from the ''/etc/glite-info-static/site/site.cfg'' configuration file
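When scripting the endpoint creation, the ''service_id'' can be extracted from the <code>keystone service-list</code> output instead of being pasted by hand. A sketch, under the assumption that the table row below matches the real output format (the id is the example value used above):

```shell
# Illustrative `keystone service-list` row; a real listing also has header and
# border lines, which the awk filter below simply ignores.
cat > /tmp/service-list.txt <<'EOF'
| 8e6de5d0d7624584bed6bec9bef7c9e0 | nova | occi | Nova OCCI Service |
EOF

# Pick the id column of the row whose type column is 'occi'.
SERVICE_ID=$(awk -F'|' '$4 ~ /occi/ {gsub(/ /, "", $2); print $2}' /tmp/service-list.txt)
echo "$SERVICE_ID"
# → 8e6de5d0d7624584bed6bec9bef7c9e0
```

The id would then be passed to <code>keystone endpoint-create --service_id "$SERVICE_ID" ...</code> as shown above.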
=== Other ===
For all the other middleware, you can set up all the middleware information statically. To do so:
  
* Copy the sample provider configuration file to the default software configuration file:
 cp /opt/cloud-info-provider/etc/sample.static.yaml /opt/cloud-info-provider/etc/bdii.yaml
* Edit the <code>/opt/cloud-info-provider/etc/bdii.yaml</code> configuration, setting all the site compute and storage resource information. Most of the information to be provided is self-explanatory or specified in the comments. Below is a set of notes that can be relevant during the configuration.
  
''Configuration notes:''

* Always keep full_bdii_ldif set to False
* ''site'' parameters can be left commented, since they will be automatically retrieved from the ''/etc/glite-info-static/site/site.cfg'' configuration file
  
== Test configuration ==

Run the cloud-provider script manually and check that the output is correctly imported into the BDII. To do so, execute:

 /usr/bin/cloud-info-provider-service > cloud-ldif.ldif
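Before importing, a quick structural sanity check of the generated LDIF can be scripted. The sketch below runs against a tiny illustrative LDIF (a real <code>cloud-ldif.ldif</code> contains many entries) and just confirms that <code>dn:</code> blocks parse and that the cloud subtree is present:

```shell
# Tiny illustrative LDIF standing in for the generated cloud-ldif.ldif.
cat > /tmp/cloud-ldif.ldif <<'EOF'
dn: GLUE2GroupID=cloud,GLUE2DomainID=PRISMA-INFN-BARI,o=glue
objectClass: GLUE2Group
GLUE2GroupID: cloud
EOF

# Count entries and check for the cloud group DN before running ldapadd.
ENTRIES=$(grep -c '^dn:' /tmp/cloud-ldif.ldif)
echo "entries: $ENTRIES"
grep -q '^dn: GLUE2GroupID=cloud,' /tmp/cloud-ldif.ldif && echo "cloud subtree present"
```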
Then open the cloud-ldif.ldif file and check that no errors are present. Then import the data into the LDAP server via:

 ldapdelete -H ldap://full.hostname:2170 'GLUE2GroupID=cloud,GLUE2DomainID=<your site>,o=glue' -D 'your LDAP admin DN' -W
 ldapadd -f cloud-ldif.ldif -H ldap://full.hostname:2170 -D 'your LDAP admin DN' -W
and check that the cloud data has been successfully added via:

 ldapsearch -x -H ldap://full.hostname:2170 -b 'GLUE2GroupID=cloud,GLUE2DomainID=<your site>,o=glue'
== Enable provider ==

To enable the provider, simply link the executable into the BDII provider directory (by default <code>/var/lib/bdii/gip/provider/</code>, or the BDII_PROVIDER_DIR path set in <code>/etc/bdii/bdii.conf</code>):
  
 ln -fs /usr/bin/cloud-info-provider-service /var/lib/bdii/gip/provider/
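If your site overrides BDII_PROVIDER_DIR, the link target can be resolved from <code>/etc/bdii/bdii.conf</code> rather than hard-coded. A sketch using an illustrative copy of the configuration file:

```shell
# Illustrative stand-in for /etc/bdii/bdii.conf.
cat > /tmp/bdii.conf <<'EOF'
BDII_PROVIDER_DIR=/var/lib/bdii/gip/provider
EOF

# Use the configured directory if present, else the documented default.
PROVIDER_DIR=$(awk -F= '/^BDII_PROVIDER_DIR=/{print $2}' /tmp/bdii.conf)
PROVIDER_DIR=${PROVIDER_DIR:-/var/lib/bdii/gip/provider}
echo "$PROVIDER_DIR"
# → /var/lib/bdii/gip/provider
```

The <code>ln -fs</code> command above would then target <code>"$PROVIDER_DIR/"</code>.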
 

Revision as of 18:34, 21 July 2014

Purpose

This page provides instructions on how to configure the Federated Cloud Production BDII.

Installation guide

Pre-requisites

This guide has the following pre-requisites:

  • Site-BDII, with support to GLUE2 schema specifications. If you do not have already a site BDII installed from EMI or UMD you can follow the installation guide below
  • Python 2.2.x

Install Site-BDII

If you have already a production BDII (eg. for existing Grid or storage resources), you can skip this step. Otherwise, here is a quick guide on how to install and configure a site BDII:

  1. Install UMD repository according to the instructions here: http://repository.egi.eu/category/umd_releases/distribution/umd-3/
  2. Install Site-BDII packages: yum install bdii bdii-config-site
  3. Edit the file /etc/glite-info-static/site/site.cfg with your site information
  4. Start the BDII service: service bdii start; chkconfig bdii on
  5. Configure your GOCDB site information the 'GIIS URL' with the address of your site BDII and the base schema, (eg: ldap://prisma-cloud.ba.infn.it:2170/GLUE2DomainID=PRISMA-INFN-BARI,o=glue )

Install the cloud resource provider script

For filling the BDII with the cloud resource information, you need to install the cloud resource provider script.

For RHEL/CentOS/ScientifcLinux 6.x

  1. Install EPEL (follow instructions here)
  2. Install cloud provider script via RPM
yum localinstall http://github.com/EGI-FCTF/BDIIscripts/raw/master/rpm/cloud-info-provider-service-0.2-1.el6.noarch.rpm

For other OSes (Install from sources)

git clone http://github.com/EGI-FCTF/BDIIscripts/
cd BDIIscripts
pip install -e .

Configuration guide

Configure middleware backend

The cloud provider script information is retrieved partially from a static configuration file and partially from the cloud middleware directly. Thus, the configuration depends on which middleware you have installed.

OpenNebula

NOTE: This is a pure OpenNebula installation. If you have installed OpenNebula via rOCCI, refer to the OpenNebula via rOCCI guide.

  • Copy the sample provider configuration file to the default software configuration file
cp /opt/cloud-info-provider/etc/sample.opennebula.yaml /opt/cloud-info-provider/etc/bdii.yaml
  • Edit the /opt/cloud-info-provider/etc/bdii.yaml configuration, setting up the site permanent information and the OpenNebula connection information. Most of the information to be provider is self explanatory or specified in the comments. Below there is a set of notes who can be relevant during the configuration.

Configuration notes:

  • Keep always full_bdii_ldif set to False
  • You need to specify connection parameters to the OpenNebula XML-RPC interface. on_auth should contain the authorization parameters for an existing user with full read permissions on the image disks. If the user has been created with the core driver, this parameter shall be set to <username>:<password>. on_rpcxml_endpoint shall contain the address of the RPCv2 endpoint. Usually it is http://myipaddress:2633/RPC2 . If not on a secure network, it is suggested to provide this interface via https, since the on_auth parameter will be sent in clear text to the server.
  • site parameters can be left commented, since they will be automatically retreived from the /etc/glite-info-static/site/site.cfg configuration file
  • Compute templates can be ignored, since OpenNebula has no concept of resource flavours
  • Object storage services (STorage-as-a-service) can be set statically. As they are not provided by OpenNebula, they can be ignored or set to the ones provided by other middleware.

OpenNebula via rOCCI

  • Copy the sample provider configuration file to the default software configuration file
cp /opt/cloud-info-provider/etc/sample.opennebularocci.yaml /opt/cloud-info-provider/etc/bdii.yaml
  • Edit the /opt/cloud-info-provider/etc/bdii.yaml configuration, setting up the site permanent information and the OpenStack connection information. Most of the information to be provider is self explanatory or specified in the comments. Below there is a set of notes who can be relevant during the configuration.

Configuration notes:

  • Always keep full_bdii_ldif set to False
  • You need to specify the connection parameters for the OpenNebula XML-RPC interface (the backend behind rOCCI). on_auth should contain the credentials of an existing user with full read permissions on the image disks; if the user was created with the core driver, set this parameter to <username>:<password>. on_rpcxml_endpoint should contain the address of the RPCv2 endpoint, usually http://myipaddress:2633/RPC2. If the network is not secure, it is recommended to expose this interface via HTTPS, since the on_auth parameter is sent to the server in clear text.
  • Site parameters can be left commented, since they are automatically retrieved from the /etc/glite-info-static/site/site.cfg configuration file
  • Compute templates can be gathered in two ways: directly from the rOCCI configuration, by pointing the template_dir parameter at the rOCCI configuration folder, or manually, by placing them in the configuration file. One option does not preclude the other; the resulting templates are the merge of the two.
  • Images are retrieved from the OpenNebula templates, to mimic the behavior of rOCCI.
  • Object storage services (Storage-as-a-Service) can be set statically. Since they are not provided by OpenNebula, they can be ignored or set to those provided by other middleware.
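The rOCCI variant uses the same OpenNebula connection parameters, plus template_dir for gathering compute templates. The fragment below is a sketch only: the parameter names come from this guide, while the example credentials and the rOCCI folder path shown are hypothetical, so take the real structure from the shipped sample.opennebularocci.yaml.

```yaml
# Illustrative fragment -- key placement may differ in the shipped sample file
full_bdii_ldif: False                       # keep this set to False

# OpenNebula XML-RPC connection, as for the plain OpenNebula provider
on_auth: 'oneadmin:example_password'        # <username>:<password>
on_rpcxml_endpoint: 'http://myipaddress:2633/RPC2'

# Point template_dir at the rOCCI configuration folder to gather compute
# templates automatically (path below is hypothetical); templates written
# directly in this file are merged with the ones found in that folder
template_dir: '/etc/occi-server/templates'
```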

OpenStack

  • Install the Nova Python SDK (needed by the OpenStack driver). Packages should be available for your distribution; on RHEL they are provided by EPEL and can be installed via
yum install -y python-novaclient
  • Copy the sample provider configuration file to the default software configuration file
cp /opt/cloud-info-provider/etc/sample.openstack.yaml /opt/cloud-info-provider/etc/bdii.yaml
  • Edit the /opt/cloud-info-provider/etc/bdii.yaml configuration file, setting the permanent site information and the OpenStack connection information. Most of the information to be provided is self-explanatory or explained in the file comments. Below is a set of notes that may be relevant during the configuration.

Configuration notes:

  • Always keep full_bdii_ldif set to False
  • You need to specify the connection parameters for the OpenStack authentication service (Keystone). OpenStack will then obtain the API endpoints from Keystone.
  • Be sure that Keystone contains the OCCI endpoint, otherwise it will not be published by the BDII. You can check this via the command keystone service-list. To create a new service and endpoint, run
keystone service-create --name nova --type occi --description 'Nova OCCI Service'
keystone endpoint-create --service_id <service_id> --region RegionOne --publicurl http://$HOSTNAME:8787/ --internalurl http://$HOSTNAME:8787/ --adminurl http://$HOSTNAME:8787/
where <service_id> is the ID reported by keystone service-list
  • In production environments, it is recommended to set the insecure option to False and to uncomment os_cacert
  • Site parameters can be left commented, since they are automatically retrieved from the /etc/glite-info-static/site/site.cfg configuration file
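As a rough sketch, the production-related settings from the notes above could look as follows in bdii.yaml. Only full_bdii_ldif, insecure and os_cacert are named in this guide; the Keystone credential keys and the CA bundle path shown are hypothetical placeholders, so rely on the shipped sample.openstack.yaml for the real parameter names.

```yaml
# Illustrative fragment -- key placement may differ in the shipped sample file
full_bdii_ldif: False                # keep this set to False

# Keystone connection (hypothetical key names); the provider obtains
# the API endpoints from Keystone once authenticated
#username: 'bdii'
#password: 'example_password'
#auth_url: 'https://keystone.example.org:5000/v2.0'

# In production, verify the server certificate
insecure: False
os_cacert: '/etc/pki/tls/certs/ca-bundle.crt'   # hypothetical path
```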

Other

For all other middleware, you can set up all the middleware information statically. To do so:

  • Copy the sample provider configuration file to the default software configuration file
cp /opt/cloud-info-provider/etc/sample.static.yaml /opt/cloud-info-provider/etc/bdii.yaml
  • Edit the /opt/cloud-info-provider/etc/bdii.yaml configuration file, setting all the site compute and storage resource information. Most of the information to be provided is self-explanatory or explained in the comments. Below is a set of notes that may be relevant during the configuration.

Configuration notes:

  • Always keep full_bdii_ldif set to False
  • Site parameters can be left commented, since they are automatically retrieved from the /etc/glite-info-static/site/site.cfg configuration file

Test configuration

Manually run the cloud-info-provider script and check that its output is correctly imported into the BDII. To do so, execute

/usr/bin/cloud-info-provider-service > cloud-ldif.ldif

Then open the cloud-ldif.ldif file and check that it contains no errors. Next, import the data into the LDAP server via

ldapdelete -H ldap://full.hostname:2170 'GLUE2GroupID=cloud,GLUE2DomainID=<your site>,o=glue' -D 'your LDAP admin DN' -W
ldapadd -f cloud-ldif.ldif -H ldap://full.hostname:2170 -D 'your LDAP admin DN' -W

and check that the cloud data has been successfully added via

ldapsearch -x -H ldap://full.hostname:2170 -b 'GLUE2GroupID=cloud,GLUE2DomainID=<your site>,o=glue'

Enable provider

To enable the provider, simply link the executable into the BDII provider directory (by default /var/lib/bdii/gip/provider/, or the BDII_PROVIDER_DIR path set in /etc/bdii/bdii.conf)

ln -fs /usr/bin/cloud-info-provider-service /var/lib/bdii/gip/provider/