The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.

HOWTO16 How to enable a Virtual Organisation in the EGI Federated Cloud

This page is deprecated; the content has been moved to https://docs.egi.eu/providers/cloud-compute




This page provides information on how to enable a Virtual Organisation in the EGI Federated Cloud.

Basic VO configuration

Every member of the federation is expected to support the dteam and ops VOs. Support for the fedcloud.egi.eu VO is also welcome.

You need to include the appropriate .lsc files for each VO under /etc/grid-security/vomsdir/:

mkdir -p /etc/grid-security/vomsdir/fedcloud.egi.eu

cat > /etc/grid-security/vomsdir/fedcloud.egi.eu/voms1.grid.cesnet.cz.lsc << EOF
/DC=org/DC=terena/DC=tcs/C=CZ/ST=Hlavni mesto Praha/L=Praha 6/O=CESNET/CN=voms1.grid.cesnet.cz
/C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3
EOF

cat > /etc/grid-security/vomsdir/fedcloud.egi.eu/voms2.grid.cesnet.cz.lsc << EOF
/DC=cz/DC=cesnet-ca/O=CESNET/CN=voms2.grid.cesnet.cz
/DC=cz/DC=cesnet-ca/O=CESNET CA/CN=CESNET CA 3
EOF

mkdir -p /etc/grid-security/vomsdir/dteam

cat > /etc/grid-security/vomsdir/dteam/voms.hellasgrid.gr.lsc << EOF
/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms.hellasgrid.gr
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006
EOF

cat > /etc/grid-security/vomsdir/dteam/voms2.hellasgrid.gr.lsc << EOF
/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006
EOF

mkdir -p /etc/grid-security/vomsdir/ops

cat > /etc/grid-security/vomsdir/ops/lcg-voms2.cern.ch.lsc << EOF
/DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch
/DC=ch/DC=cern/CN=CERN Grid Certification Authority
EOF

cat > /etc/grid-security/vomsdir/ops/voms2.cern.ch.lsc << EOF
/DC=ch/DC=cern/OU=computers/CN=voms2.cern.ch
/DC=ch/DC=cern/CN=CERN Grid Certification Authority
EOF
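As a quick sanity check (a sketch, not part of the official setup), you can verify that every .lsc file you created contains exactly the two expected DN lines, the host DN followed by the issuing CA DN:

```shell
# Verify each .lsc file holds exactly two non-empty lines.
# VOMSDIR can be overridden for testing; it defaults to the standard path.
VOMSDIR=${VOMSDIR:-/etc/grid-security/vomsdir}
for f in "$VOMSDIR"/*/*.lsc; do
    [ -e "$f" ] || continue            # skip when the glob matches nothing
    lines=$(grep -c . "$f")
    [ "$lines" -eq 2 ] || echo "WARN: $f has $lines non-empty lines (expected 2)"
done
```

Files that print a warning are likely truncated or contain extra content and will cause VOMS proxy validation to fail.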

OpenNebula

Assuming that you are using OpenNebula v5.x and rOCCI-server v2.x, you have to perform the following steps to support a new Virtual Organisation:

VOMS/GridSite

For each allowed VO, you need a subdirectory in /etc/grid-security/vomsdir/ that contains the .lsc files of all trusted VOMS servers for the given VO. Each .lsc file must be named after the fully qualified host name of the VOMS server, with an .lsc extension, and must contain:

  • First line: subject DN of the VOMS server host certificate
  • Second line: subject DN of the CA that issued the VOMS server host certificate

For example, for the fedcloud.egi.eu VO, these would be:

$ cat /etc/grid-security/vomsdir/fedcloud.egi.eu/voms1.grid.cesnet.cz.lsc
/DC=org/DC=terena/DC=tcs/C=CZ/ST=Hlavni mesto Praha/L=Praha 6/O=CESNET/CN=voms1.grid.cesnet.cz
/C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3

$ cat /etc/grid-security/vomsdir/fedcloud.egi.eu/voms2.grid.cesnet.cz.lsc 
/DC=cz/DC=cesnet-ca/O=CESNET/CN=voms2.grid.cesnet.cz
/DC=cz/DC=cesnet-ca/O=CESNET CA/CN=CESNET CA 3

OpenNebula

For each allowed VO, you need to create a group in OpenNebula with a matching name. Every group intended for use with federated authentication (VOMS or OIDC) must include the following attribute:

KEYSTORM=YES

For example, for the fedcloud.egi.eu VO, the command to create the appropriate group would be:

# run on the OpenNebula front-end
$ onegroup create fedcloud.egi.eu
$ onegroup update fedcloud.egi.eu
# add KEYSTORM=YES in editor
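When scripting the setup, the interactive editor step can be avoided by appending the attribute from a template file. This is a sketch: check that your onegroup version supports the --append flag (see onegroup update --help) before relying on it.

```shell
# Write the attribute to a temporary template file...
cat > /tmp/fedcloud.egi.eu.tmpl << EOF
KEYSTORM=YES
EOF
# ...then append it to the group (run on the OpenNebula front-end):
# onegroup create fedcloud.egi.eu
# onegroup update fedcloud.egi.eu /tmp/fedcloud.egi.eu.tmpl --append
```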

OpenStack

Assuming that you are using the Keystone VOMS module, the required steps are listed in the VOMS module documentation.

Keystone V2

The configuration for the Keystone V2 authentication is as follows:

  • Configure your LSC files according to the VOMS documentation
  • Create a tenant for your new VO:
$ keystone tenant-create --name <tenant_name> --description "Tenant for VO <vo>"
  • Add the mapping to your voms.json mapping. It must be valid JSON (you can check its correctness with an online validator or with python -mjson.tool /etc/keystone/voms.json). Edit the file and add an entry like this:
{
   "voname|FQAN": {
       "tenant": "tenant_name"
   }
}


  • Note that you can use the FQAN from the incoming proxy, so you can map a group within a VO into a tenant, like this:
{
   "dteam": {
       "tenant": "dteam"
   },
   "/dteam/NGI_IBERGRID": {
       "tenant": "dteam_ibergrid"
   }
}
  • Restart the Apache server, and it's done.
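The JSON check mentioned in the steps above can be scripted. The sketch below writes the example dteam mapping to a temporary file and validates it; in production, point the check at /etc/keystone/voms.json instead before restarting Apache.

```shell
# Write the example dteam mapping to a temp file and syntax-check it.
# python3 -m json.tool exits non-zero on invalid JSON.
VOMS_JSON=$(mktemp)
cat > "$VOMS_JSON" << EOF
{
   "dteam": {
       "tenant": "dteam"
   },
   "/dteam/NGI_IBERGRID": {
       "tenant": "dteam_ibergrid"
   }
}
EOF
python3 -m json.tool "$VOMS_JSON" > /dev/null && echo "voms.json OK"
```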

Sample config

Below is a sample voms.json file; adapt it with the appropriate names of your tenants (make sure they exist before any user authenticates!):

{
    "fedcloud.egi.eu": {
        "tenant": "VO:fedcloud.egi.eu"
    },
    "dteam": {
        "tenant": "VO:dteam"
    },
    "ops": {
        "tenant": "VO:ops"
    }
}
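To cross-check that every tenant referenced in the mapping actually exists, you can list the tenant names the file refers to and compare them against your Keystone tenants. This is a sketch: the sample mapping is written to a temporary file here, while in production you would read /etc/keystone/voms.json directly.

```shell
# Write the sample mapping to a temp file for illustration.
F=$(mktemp)
cat > "$F" << EOF
{
    "fedcloud.egi.eu": {"tenant": "VO:fedcloud.egi.eu"},
    "dteam": {"tenant": "VO:dteam"},
    "ops": {"tenant": "VO:ops"}
}
EOF
# Print every tenant name referenced by the mapping; compare the output
# against `keystone tenant-list` on your controller.
python3 -c 'import json,sys; print("\n".join(sorted(v["tenant"] for v in json.load(open(sys.argv[1])).values())))' "$F"
```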