HOWTO16 How to enable a Virtual Organisation on a EGI Federated Cloud
{{Template:Op menubar}} {{Template:Doc_menubar}} {{TOC_right}}
<br>
This page provides information on how to enable a Virtual Organisation on the EGI Federated Cloud.
= Basic VO configuration =
Every member of the federation is expected to support the [http://operations-portal.egi.eu/vo/view/voname/dteam dteam] and [http://operations-portal.egi.eu/vo/view/voname/ops ops] VOs. Support for [http://operations-portal.egi.eu/vo/view/voname/fedcloud.egi.eu fedcloud.egi.eu] is also welcome.
You need to include the appropriate <code>.lsc</code> files for each VO at <code>/etc/grid-security/vomsdir/</code>:
<pre>
mkdir -p /etc/grid-security/vomsdir/fedcloud.egi.eu

cat > /etc/grid-security/vomsdir/fedcloud.egi.eu/voms1.grid.cesnet.cz.lsc << EOF
/DC=org/DC=terena/DC=tcs/C=CZ/ST=Hlavni mesto Praha/L=Praha 6/O=CESNET/CN=voms1.grid.cesnet.cz
/C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3
EOF
cat > /etc/grid-security/vomsdir/fedcloud.egi.eu/voms2.grid.cesnet.cz.lsc << EOF | |||
/DC=cz/DC=cesnet-ca/O=CESNET/CN=voms2.grid.cesnet.cz | |||
/DC=cz/DC=cesnet-ca/O=CESNET CA/CN=CESNET CA 3 | |||
EOF | |||
mkdir -p /etc/grid-security/vomsdir/dteam | |||
cat > /etc/grid-security/vomsdir/dteam/voms.hellasgrid.gr.lsc << EOF | |||
/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms.hellasgrid.gr | |||
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006 | |||
EOF | |||
cat > /etc/grid-security/vomsdir/dteam/voms2.hellasgrid.gr.lsc << EOF | |||
/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr | |||
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006 | |||
EOF | |||
mkdir -p /etc/grid-security/vomsdir/ops | |||
cat > /etc/grid-security/vomsdir/ops/lcg-voms2.cern.ch.lsc << EOF | |||
/DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch | |||
/DC=ch/DC=cern/CN=CERN Grid Certification Authority | |||
EOF | |||
cat > /etc/grid-security/vomsdir/ops/voms2.cern.ch.lsc << EOF | |||
/DC=ch/DC=cern/OU=computers/CN=voms2.cern.ch | |||
/DC=ch/DC=cern/CN=CERN Grid Certification Authority | |||
EOF | |||
</pre> | |||
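Once these files are in place, a quick sanity check can catch truncated or malformed entries. A minimal sketch, run here against a throwaway directory so it is self-contained; on a real host, point <code>vomsdir</code> at <code>/etc/grid-security/vomsdir</code>:

```shell
#!/bin/sh
# Sketch: check that every .lsc file holds exactly two DN lines
# (host certificate subject, then issuing CA subject).
# Demonstrated on a throwaway tree, not the real vomsdir.
vomsdir=$(mktemp -d)
mkdir -p "$vomsdir/fedcloud.egi.eu"
printf '%s\n' \
    '/DC=org/DC=terena/DC=tcs/C=CZ/ST=Hlavni mesto Praha/L=Praha 6/O=CESNET/CN=voms1.grid.cesnet.cz' \
    '/C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3' \
    > "$vomsdir/fedcloud.egi.eu/voms1.grid.cesnet.cz.lsc"
bad=0
for f in "$vomsdir"/*/*.lsc; do
    [ -e "$f" ] || continue
    # Each .lsc file must contain exactly two lines, each a DN ("/...").
    [ "$(grep -c '^/' "$f")" -eq 2 ] || { echo "MALFORMED: $f"; bad=1; }
done
[ "$bad" -eq 0 ] && echo "all .lsc files OK"
rm -rf "$vomsdir"
```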
= OpenNebula = | |||
Assuming that you are using ''OpenNebula v5.x'' with ''rOCCI-server v2.x'', you have to perform the following steps to support a new Virtual Organisation:
*Configure [[Fedcloud-tf:Support a new Virtual Organisation#VOMS.2FGridSite|VOMS/GridSite]] | |||
*Create a new group in [[Fedcloud-tf:Support a new Virtual Organisation#OpenNebula|OpenNebula]] | |||
== VOMS/GridSite == | |||
For each allowed VO, you need a subdirectory in <code>/etc/grid-security/vomsdir/</code> that contains the ''lsc'' files of all trusted VOMS servers for the given VO. The ''lsc'' files must be named after the fully qualified host name of the VOMS server, with an ''.lsc'' extension, and must contain:
*First line: subject DN of the VOMS server host certificate | |||
*Second line: subject DN of the CA that issued the VOMS server host certificate | |||
For example, for the ''fedcloud.egi.eu'' VO, these would be: | |||
<pre>$ cat /etc/grid-security/vomsdir/fedcloud.egi.eu/voms1.grid.cesnet.cz.lsc | |||
/DC=org/DC=terena/DC=tcs/C=CZ/ST=Hlavni mesto Praha/L=Praha 6/O=CESNET/CN=voms1.grid.cesnet.cz | |||
/C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3 | |||
$ cat /etc/grid-security/vomsdir/fedcloud.egi.eu/voms2.grid.cesnet.cz.lsc
/DC=cz/DC=cesnet-ca/O=CESNET/CN=voms2.grid.cesnet.cz
/DC=cz/DC=cesnet-ca/O=CESNET CA/CN=CESNET CA 3
</pre>
== OpenNebula ==
For each allowed VO, you need to create a group in OpenNebula with a matching name. Every group intended for use with federated authentication (VOMS or OIDC) must include the following attribute:
<pre>
KEYSTORM=YES
</pre>
For example, for the ''fedcloud.egi.eu'' VO, the commands to create the appropriate group would be:
<pre># the OpenNebula front-end
$ onegroup create fedcloud.egi.eu
$ onegroup update fedcloud.egi.eu
# add KEYSTORM=YES in editor
</pre>
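The interactive editor step can also be scripted. A non-interactive sketch, assuming your version of the OpenNebula CLI accepts a template file and the <code>--append</code> flag with <code>onegroup update</code> (the <code>onegroup</code> calls are left commented out, since they require a running front-end):

```shell
#!/bin/sh
# Prepare a template file carrying the required attribute, then
# (on the front-end) create the group and append the attribute.
# The VO name below is an example; substitute the VO being enabled.
vo=fedcloud.egi.eu
tmpl=$(mktemp)
echo 'KEYSTORM=YES' > "$tmpl"
# On the OpenNebula front-end, assuming file + --append are supported:
#   onegroup create "$vo"
#   onegroup update "$vo" "$tmpl" --append
cat "$tmpl"
```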
= OpenStack =
Assuming that you are using the [http://ifca.github.io/keystone-voms/ Keystone VOMS module], the steps needed are listed in the [https://keystone-voms.readthedocs.org/en/latest/configuration.html#allowed-vos VOMS module documentation].
== Keystone V2 ==
The configuration for the Keystone V2 authentication is as follows:
*Configure your LSC files according to the [http://italiangrid.github.io/voms/documentation/voms-clients-guide/3.0.3/#voms-trust VOMS documentation] | |||
*Create a tenant for your new VO: | |||
$ keystone tenant-create --name <tenant_name> --description "Tenant for VO <vo>" | |||
*Add the mapping to your <code>voms.json</code> file. It must be proper JSON (you can check its correctness [http://jsonlint.com/ online] or with <code>python -mjson.tool /etc/keystone/voms.json</code>). Edit the file, and add an entry like this:
{ | |||
"voname|FQAN": { | |||
"tenant": "tenant_name" | |||
} | |||
} | |||
*Note that you can use the FQAN from the incoming proxy, so you can map a group within a VO into a tenant, like this: | |||
{
"dteam": { | |||
"tenant": "dteam" | |||
}, | |||
"/dteam/NGI_IBERGRID": { | |||
"tenant": "dteam_ibergrid" | |||
} | |||
}
*Restart the Apache server, and it's done. | |||
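Before the restart, it is worth confirming that the mapping parses as strict JSON, as suggested above. A self-contained sketch, using a throwaway file; on the server, run the same check against <code>/etc/keystone/voms.json</code> (<code>python3 -m json.tool</code> is used here as the equivalent of <code>python -mjson.tool</code>):

```shell
#!/bin/sh
# Validate a VOMS-to-tenant mapping as strict JSON before
# restarting Apache. Demonstrated on a throwaway file.
f=$(mktemp)
cat > "$f" << 'EOF'
{
    "dteam": {
        "tenant": "dteam"
    },
    "/dteam/NGI_IBERGRID": {
        "tenant": "dteam_ibergrid"
    }
}
EOF
if python3 -m json.tool "$f" > /dev/null 2>&1; then
    ok=yes
else
    ok=no
fi
echo "mapping valid: $ok"
rm -f "$f"
```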
=== Sample config === | |||
Below is a sample <code>voms.json</code> file; adapt it with the appropriate names of your tenants (be sure that they exist before authenticating any user!):
{
    "fedcloud.egi.eu": {
        "tenant": "VO:fedcloud.egi.eu"
    },
    "dteam": {
        "tenant": "VO:dteam"
    },
    "ops": {
        "tenant": "VO:ops"
    }
}
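The warning about tenants existing beforehand can be checked mechanically: extract every tenant name the mapping refers to and compare the list against the tenants actually defined in Keystone. A sketch (the live <code>keystone tenant-list</code> cross-check is left as a comment, since it needs credentials; the file here is a throwaway copy of the sample mapping):

```shell
#!/bin/sh
# List the tenant names a voms.json mapping refers to; each of
# these must already exist in Keystone before users authenticate.
f=$(mktemp)
cat > "$f" << 'EOF'
{
    "fedcloud.egi.eu": { "tenant": "VO:fedcloud.egi.eu" },
    "dteam": { "tenant": "VO:dteam" },
    "ops": { "tenant": "VO:ops" }
}
EOF
tenants=$(python3 -c 'import json, sys
for entry in json.load(open(sys.argv[1])).values():
    print(entry["tenant"])' "$f" | sort)
echo "$tenants"
# Cross-check each name against the deployment, e.g.:
#   keystone tenant-list
rm -f "$f"
```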
[[Category:Operations_Manuals]]
Revision as of 11:18, 22 November 2017