
Federated Cloud Containers

{{Fedcloud_Menu}} {{TOC_right}}


= Running Docker Containers in the EGI Federated Cloud  =
Single-node containers can be executed at any EGI Federated Cloud site by either:
*'''(recommended)''' using a pre-configured image with docker like the [https://appdb.egi.eu/store/vappliance/docker.ubuntu.14.04 EGI Docker image]
*installing docker on top of an existing VM (e.g. by following the [https://docs.docker.com/engine/installation/ installation instructions on docker docs])
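For the second option, one common route on a fresh Ubuntu VM is Docker's convenience script; this is only a sketch (it assumes outbound network access and sudo rights on the VM), and the installation instructions linked above remain the authoritative reference:

```shell
# Sketch: install docker with the convenience script from get.docker.com
# (not recommended for production setups; review the script before running)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Verify the daemon answers
sudo docker version
```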


When using Docker for complex applications with several interrelated containers, it is recommended to use a container orchestration platform such as Kubernetes, or to use Docker Swarm mode.

= EGI Docker image =
 
There are two Docker-ready images at the AppDB:
* [https://appdb.egi.eu/store/vappliance/docker.ubuntu.14.04 EGI Docker Ubuntu 14.04 image]
* [https://appdb.egi.eu/store/vappliance/docker.ubuntu.14.04 EGI Docker Ubuntu 16.04 image]


These are VM images based on Ubuntu with docker installed and running. You can start them as any other image available from AppDB:
# Go to the [https://appdb.egi.eu/store/vappliance/docker.ubuntu.14.04 EGI Docker image entry in AppDB]
# Check the IDs of the OCCI templates and endpoints to run the image for your VO at the selected site
# Use an ssh-key when creating the VM ([[HOWTO11_How_to_use_the_rOCCI_Client#How_to_create_a_key_pair_to_access_the_VMs_via_SSH|check the FAQ for more info]])
# ''(Optional)'' Some sites may require the [[HOWTO11_How_to_use_the_rOCCI_Client#How_to_attach_a_public_ip_address_to_a_compute_resource|allocation of a public IP]] before you can log in
# Then you can either log in to the VM and use docker from there, or configure your docker client to connect to the remote VM.
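As an illustration of steps 2–4, a VM can be instantiated with the rOCCI client roughly as follows. This is a sketch only: the endpoint is one of the OCCI endpoints mentioned later on this page, and the <code>os_tpl</code>/<code>resource_tpl</code> mixin identifiers are placeholders you must replace with the IDs shown in AppDB for your site:

```shell
# Sketch only: create a compute resource from the Docker image (template IDs
# below are placeholders; take the real ones from the AppDB entry for your site)
occi --endpoint https://carach5.ics.muni.cz:11443/ \
     --auth x509 --voms \
     --action create --resource compute \
     --mixin os_tpl#<id-of-the-docker-image> \
     --mixin resource_tpl#<id-of-the-flavour> \
     --attribute occi.core.title="docker-vm" \
     --context public_key="file://$HOME/.ssh/id_rsa.pub"
```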


== Using docker from inside the VM ==

You can log in with user <code>ubuntu</code> and your private ssh key:
<pre>
ssh -i <private key> ubuntu@<your VM ip>
</pre>

Verify that docker is installed correctly. This command downloads a test image and runs it in a container:
<pre>
ubuntu@fedcloud_vm:~$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b901d36b6f2f: Pull complete
0a6ba66e537a: Pull complete
Digest: sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
...
</pre>

Start using docker:
<pre>
ubuntu@fedcloud_vm:~$ sudo docker run busybox echo "hello"
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
...
Status: Downloaded newer image for busybox:latest
hello
</pre>

== Connect remotely to the VM ==

Alternatively, you can use docker-machine to configure your VM so that you can run docker commands from your own computer:
<pre>
docker-machine create --driver generic --generic-ip-address <ip of your VM> \
                      --generic-ssh-user ubuntu \
                      --generic-ssh-key <your private ssh key> \
                      <a name for the VM>
</pre>

Then configure your shell to connect to that VM:
<pre>
eval "$(docker-machine env <name of the VM>)"
</pre>

and start using docker:
<pre>
$ docker run docker/whalesay cowsay boo
Unable to find image 'docker/whalesay:latest' locally
latest: Pulling from docker/whalesay
...
Status: Downloaded newer image for docker/whalesay:latest
 _____
< boo >
 -----
    \
     \
      \
                    ##        .
              ## ## ##       ==
           ## ## ## ##      ===
       /""""""""""""""""___/ ===
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
       \______ o          __/
        \    \        __/
          \____\______/
</pre>




== Clusters ==

You can run several Docker cluster management tools on the EGI FedCloud. Here we detail the configuration process for some of them.
 
=== Docker Swarm ===
 
[https://www.docker.com/products/docker-swarm Docker Swarm] is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host that can be accessed with the standard Docker API. Any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
 
In order to build a Swarm cluster in FedCloud you will need to start several VMs and configure them to run the Swarm. It is recommended that you set up TLS, especially when you use the system over an untrusted network. You can find below instructions to deploy the Swarm using docker-machine (which will set up TLS for you) or manually. Instructions for a similar deployment using orchestrators will be provided soon.
 
==== Docker-machine ====
 
===== Pre-requisites =====
 
* Install [https://docs.docker.com/machine/install-machine/ docker-machine]
* Start the VMs at your preferred site that will form your initial cluster. These VMs must be accessible via ssh with a public key. You should have:
** 1 VM for the discovery service (consul)
** N VMs for the Swarm nodes
 
===== Setting up discovery service =====
 
In order to use advanced Swarm features (such as overlay networks) you will need to set up a [https://docs.docker.com/swarm/discovery/ discovery service]. For these instructions, we are using [https://www.consul.io/ consul].
 
First, initialise docker on the discovery VM:
<pre>
docker-machine create --driver generic \
    --generic-ssh-key <your-ssh-key> \
    --generic-ssh-user <your-ssh-user> \
    --generic-ip-address <discovery-vm-ip> \
    discovery
</pre>
 
Once done, load the environment to use this VM with docker:
<pre>
eval $(docker-machine env discovery)
</pre>
 
And run the discovery service:
<pre>
docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
</pre>
 
Save the IP address of the VM (preferably the private one) in an environment variable for later:
<pre>
CONSUL_IP=<IP of your VM>
</pre>
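To confirm the discovery service came up, you can query consul's HTTP status API (this assumes port 8500 on the discovery VM is reachable from your client); a non-empty leader address means the single-node consul cluster has bootstrapped:

```shell
# Should print the consul leader address, e.g. "172.17.0.2:8300"
# (illustrative value; an empty reply means consul has not bootstrapped yet)
curl -s http://$CONSUL_IP:8500/v1/status/leader
```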
 
===== Start the Swarm =====
 
First start with docker-machine the Swarm manager:
<pre>
docker-machine create --driver generic \
    --generic-ssh-key <your-ssh-key>  \
    --generic-ssh-user <your-ssh-user> \
    --generic-ip-address <manager-vm-ip> \
    --swarm --swarm-master \
    --swarm-discovery consul://$CONSUL_IP:8500  \
    --engine-opt="cluster-store=consul://$CONSUL_IP:8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    manager
</pre>
 
And for every other node in the cluster run the following command:
<pre>
docker-machine create --driver generic \
    --generic-ssh-key <your-ssh-key>  \
    --generic-ssh-user <your-ssh-user> \
    --generic-ip-address <node-vm-ip> \
    --swarm \
    --swarm-discovery consul://$CONSUL_IP:8500  \
    --engine-opt="cluster-store=consul://$CONSUL_IP:8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    <node-name>
</pre>
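If you have many nodes, the per-node command above can be generated in a small loop; the IPs and node names below are illustrative placeholders, and the loop only prints the commands so you can review them before running:

```shell
# Print (not run) one docker-machine create command per node; pipe the output
# to sh once you are happy with it. IPs below are examples, not real hosts.
CONSUL_IP=172.16.8.149
i=1
for ip in 172.16.8.151 172.16.8.152; do
    echo docker-machine create --driver generic \
        --generic-ssh-key \$HOME/.ssh/id_rsa \
        --generic-ssh-user ubuntu \
        --generic-ip-address "$ip" \
        --swarm \
        --swarm-discovery "consul://$CONSUL_IP:8500" \
        "node-$i"
    i=$((i+1))
done
```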
 
When finished, you can check with <code>docker-machine ls</code> that all the VMs are ready:
<pre>
$ docker-machine ls
NAME       ACTIVE   DRIVER    STATE     URL                       SWARM              DOCKER    ERRORS
discovery  *        generic   Running   tcp://172.16.8.149:2376                      v1.11.2
manager    -        generic   Running   tcp://172.16.8.150:2376   manager (master)   v1.11.2
node-1     -        generic   Running   tcp://172.16.8.151:2376   manager            v1.11.2
node-2     -        generic   Running   tcp://172.16.8.152:2376   manager            v1.11.2
</pre>
 
To start using your Swarm, get the environment variables from docker-machine:
<pre>
eval $(docker-machine env manager --swarm)
</pre>
 
Check the swarm info:
<pre>
$ docker info
Containers: 4
Running: 4
Paused: 0
Stopped: 0
Images: 3
Server Version: swarm/1.2.3
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 3
manager: 172.16.8.150:2376
  └ ID: NREI:A2CA:XHK3:ZFXZ:TUY3:2HWM:G6QI:WHMS:46OG:M6L5:BK4H:XAJD
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-07-11T07:57:29Z
  └ ServerVersion: 1.11.2
node-1: 172.16.8.151:2376
  └ ID: WTPE:7ZNL:W734:NB77:WDZA:45NR:JNWK:XUQ4:SEDP:UE4R:UDTQ:ORQ5
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-07-11T07:57:33Z
  └ ServerVersion: 1.11.2
node-2: 172.16.8.152:2376
  └ ID: PZ5U:26WI:BRQW:WKA6:6WNM:E2MN:3OAT:AOM5:OCU4:IMIK:C53F:7ZHS
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-07-11T07:57:14Z
  └ ServerVersion: 1.11.2
Plugins:
Volume:
Network:
Kernel Version: 3.13.0-77-generic
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 5.987 GiB
Name: bc777ee3f68c
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support
</pre>
 
You can add more nodes as you need them with the previous docker-machine command.
 
==== Manual setup ====
 
These instructions allow you to set up a Swarm with TLS support manually. This requires creating a CA which issues certificates for your servers and clients.
 
===== Pre-requisites =====
 
* Start the VMs at your preferred site that will form your initial cluster. These VMs must be accessible via ssh with a public key. You should have:
** 1 VM for the discovery service (consul) and Swarm manager
** N VMs for the Swarm nodes
* [https://docs.docker.com/engine/installation/ Install docker] or use one of the docker-ready images provided as [[#Using_the_EGI_Docker_image|described above]].


Note: the instructions below assume an Ubuntu system; some details may differ on other distributions.

===== Create your CA =====

First create a new CA under <code>$HOME/.certs</code>:
<pre>
mkdir $HOME/.certs
CA_ID=$(uuid | cut -f1 -d"-")

openssl req -x509 -nodes -days 1825 -newkey rsa:2048 \
            -out $HOME/.certs/ca.pem -outform PEM -keyout $HOME/.certs/ca.key \
            -subj "/DC=EU/DC=EGI/CN=Docker Swarm $CA_ID"
</pre>
 
For each of your VMs, you will need to create a private key and certificate:

<pre>
IP=<your VM IP>
NAME=<your VM name>

openssl genrsa -out $HOME/.certs/$NAME.key 2048
openssl req -subj "/CN=$NAME" -new -key $HOME/.certs/$NAME.key -out $HOME/.certs/$NAME.csr

OPENSSL_CNF=`mktemp`
cat > $OPENSSL_CNF << EOF
[ v3_req ]
subjectAltName = @alt_names
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment

[alt_names]
IP.1 = $IP
EOF

openssl x509 -req -days 1825 -in $HOME/.certs/$NAME.csr -CA $HOME/.certs/ca.pem -CAkey $HOME/.certs/ca.key \
    -CAcreateserial -out $HOME/.certs/$NAME.pem -extensions v3_req -extfile $OPENSSL_CNF
rm $OPENSSL_CNF
</pre>

Copy the certificates to the VMs and move them to <code>/etc/docker</code>:
<pre>
scp $HOME/.certs/<VM name>.pem <VM IP>:server.pem
scp $HOME/.certs/<VM name>.key <VM IP>:server-key.pem
scp $HOME/.certs/ca.pem <VM IP>:ca.pem
ssh <VM IP> 'sudo mv *.pem /etc/docker'
</pre>
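It is worth verifying that a server certificate really chains to your CA before the docker daemons start using it, since docker will refuse mismatched certificates at the TLS handshake. The self-contained sketch below (throwaway files in a temporary directory, hypothetical names) mirrors the commands above and should end with <code>OK</code>:

```shell
# Throwaway demo: issue a certificate from a fresh CA and verify the chain
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Create a one-day CA and a server certificate signed by it
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
        -keyout ca.key -out ca.pem -subj "/CN=Throwaway CA" 2>/dev/null
openssl genrsa -out server.key 2048 2>/dev/null
openssl req -subj "/CN=server" -new -key server.key -out server.csr 2>/dev/null
openssl x509 -req -days 1 -in server.csr -CA ca.pem -CAkey ca.key \
        -CAcreateserial -out server.pem 2>/dev/null

# The chain must verify against the CA certificate
openssl verify -CAfile ca.pem server.pem
```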


Last, create a certificate for your client:
<pre>
openssl genrsa -out $HOME/.certs/client.key 2048

openssl req -subj "/CN=client" -new -key $HOME/.certs/client.key -out $HOME/.certs/client.csr

openssl x509 -req -days 1825 -in $HOME/.certs/client.csr -CA $HOME/.certs/ca.pem -CAkey $HOME/.certs/ca.key \
    -CAcreateserial -out $HOME/.certs/client.pem
</pre>

If you want to use environment variables to access the Swarm with the docker CLI as described below, you can also create links to the client certificate so it is found easily:
<pre>
ln -s $HOME/.certs/client.key $HOME/.certs/key.pem
ln -s $HOME/.certs/client.pem $HOME/.certs/cert.pem
</pre>


===== Setup Swarm manager =====

Log in to the VM that will act as manager, edit <code>/etc/default/docker</code> and add the following contents:
<pre>
DOCKER_OPTS='
-H tcp://0.0.0.0:2376
-H unix:///var/run/docker.sock
--storage-driver aufs
--tlsverify
--tlscacert /etc/docker/ca.pem
--tlscert /etc/docker/server.pem
--tlskey /etc/docker/server-key.pem
'
</pre>


And restart the docker daemon:
<pre>
sudo service docker restart
</pre>
Start the discovery service:
<pre>
sudo docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
</pre>


And the Swarm manager:
<pre>
sudo docker run -d -p 3376:3376 -v /etc/docker:/certs:ro  swarm manage \
                --tlsverify --tlscacert=/certs/ca.pem --tlscert=/certs/server.pem \
                --tlskey=/certs/server-key.pem --host=0.0.0.0:3376 consul://<ip of swarm master>:8500
</pre>
Now you should be able to connect from your client machine to the swarm:
<pre>
docker -H <ip of swarm master>:3376 \
      --tlsverify --tlscacert=$HOME/.certs/ca.pem \
      --tlscert=$HOME/.certs/client.pem --tlskey=$HOME/.certs/client.key \
      info
</pre>


===== Setup Swarm nodes =====


For each of the nodes of your Swarm, ssh into it and edit <code>/etc/default/docker</code> with the following contents:
<pre>
DOCKER_OPTS='
-H tcp://0.0.0.0:2376
-H unix:///var/run/docker.sock
--storage-driver aufs
--tlsverify
--tlscacert /etc/docker/ca.pem
--tlscert /etc/docker/server.pem
--tlskey /etc/docker/server-key.pem
--cluster-store=consul://<your swarm manager ip>:8500 --cluster-advertise=eth0:2376
'
</pre>


Restart the docker daemon:
<pre>
sudo service docker restart
</pre>


And join the swarm:
<pre>
sudo docker run -d swarm join --addr=<node ip>:2376 consul://<your swarm manager ip>:8500
</pre>


Back to your client, you can check again that the nodes are in the Swarm:
<pre>
$ docker  --tlsverify --tlscacert=$HOME/.certs/ca.pem \
          --tlscert=$HOME/.certs/client.pem --tlskey=$HOME/.certs/client.key \
          -H <ip of swarm master>:3376 info
Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 3
Server Version: swarm/1.2.3
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 3
manualswarmb-1: 172.16.8.155:2376
  └ ID: V7A6:E4D2:E7Z3:VQEY:JV7M:2FV4:OWXP:GQZH:KA77:CYRI:JINA:2XLT
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ UpdatedAt: 2016-07-19T08:46:13Z
  └ ServerVersion: 1.11.2
manualswarmb-2: 172.16.8.154:2376
  └ ID: ZXHJ:K2BB:ONRA:EWJY:H2YW:B3OE:ACWD:WAVO:PJY7:PDRO:362H:QFIE
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ UpdatedAt: 2016-07-19T08:46:20Z
  └ ServerVersion: 1.11.2
manualswarmb-3: 172.16.8.156:2376
  └ ID: UIPI:UQ7W:RPEX:LO4C:ZFAJ:LQ47:EFOC:G534:AL5N:C45E:2TIK:RLC3
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ UpdatedAt: 2016-07-19T08:46:03Z
  └ ServerVersion: 1.11.2
Plugins:
Volume:
Network:
Kernel Version: 3.13.0-77-generic
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 5.987 GiB
Name: 28a4ea0e9e61
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support
</pre>


For convenience you can export the following environment variables to simplify the docker commands:
<pre>
export DOCKER_HOST=<your swarm manager ip>:3376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.certs
</pre>





= Container Orchestration =

You can run several Docker cluster management tools on the EGI FedCloud. Each tool has its own specifics, but there are plenty of tools to aid in their setup. Here we cover how to configure Docker Swarm and Kubernetes by leveraging [https://www.ansible.com/ Ansible configuration management] and the [http://www.grycap.upv.es/im/index.php Infrastructure Manager (IM)] IaaS orchestrator.

== IM ==

For using IM, you will need to follow these steps first:
* Create an account on the IM server.
* Install the IM client with pip (it is recommended you do this in a [https://virtualenv.pypa.io/en/stable/ <code>virtualenv</code>]):
<pre>
pip install IM-client
</pre>
* Create an authorization file with the endpoints of the sites you plan to use, for example:
<pre>
cat > ~/.im_auth << EOF
id = im; type = InfrastructureManager; username = <your user name>; password = <your password>
id = occi_bari; type = OCCI; proxy = file(/tmp/x509up_u1000); host = http://cloud.recas.ba.infn.it:8787/occi/
id = occi_cesnet; type = OCCI; proxy = file(/tmp/x509up_u1000); host = https://carach5.ics.muni.cz:11443/
EOF
</pre>

The endpoints of the services can be obtained with the [[Federated_Cloud_APIs_and_SDKs#Discovery_of_resources|discovery tools of FedCloud]].

== Docker Swarm ==

[https://docs.docker.com/engine/swarm/ Swarm mode] is native Docker clustering technology. Since release 1.12 it is included with the Docker Engine and its configuration is greatly simplified.

=== Create the RADL description of your deployment ===

IM uses a RADL file that describes your infrastructure. You can use the following as a base to create your own deployment:

'''IN PREPARATION'''
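Once the authorization file exists, deployments are driven with the <code>im_client.py</code> command installed by IM-client. The sketch below is illustrative only: the server URL and the RADL file name are placeholders, and flag details may vary between IM-client versions, so check <code>im_client.py --help</code> for your installation:

```shell
# Sketch: launch an infrastructure described in swarm.radl (hypothetical file)
im_client.py -a ~/.im_auth -u http://<im-server>:8899 create swarm.radl

# List your infrastructures, inspect one, and tear it down when finished
im_client.py -a ~/.im_auth -u http://<im-server>:8899 list
im_client.py -a ~/.im_auth -u http://<im-server>:8899 getinfo <infrastructure-id>
im_client.py -a ~/.im_auth -u http://<im-server>:8899 destroy <infrastructure-id>
```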





= Accessing the EGI Federated Cloud from a Docker container =

EGI maintains a docker image with OCCI and VOMS clients ready to use for accessing the EGI Federated Cloud. If you have a working docker installation you can get it with the following command:
<pre>
docker pull egifedcloud/fedcloud-userinterface
</pre>

The image is based on ubuntu and adds an installation of the latest versions of the rOCCI-cli (as available in the rOCCI-cli AppDB entry) and the VOMS clients (as available in UMD). You can run the commands easily with docker:
<pre>
docker run -it egifedcloud/fedcloud-userinterface occi [args]
</pre>
or
<pre>
docker run -it egifedcloud/fedcloud-userinterface voms-proxy-init [args]
</pre>

To ease the usage of the docker client, you can clone the git repository https://github.com/enolfc/fedcloud-userinterface, where you will find a helper script: <code>occi</code>. This script checks whether you have a valid proxy and creates one for you if not found (it expects to find your certificates under <code>~/.globus</code>; check the installation of certificate files for more information), and then runs the occi command against the endpoint defined in the environment variable <code>OCCI_ENDPOINT</code> with any options passed. For example:
<pre>
OCCI_ENDPOINT=http://server4-epsh.unizar.es:8787 ./occi --action list --resource compute
</pre>
will execute the action <code>list</code> on the resource <code>compute</code> for the endpoint http://server4-epsh.unizar.es:8787.

The current directory will be mounted as a volume at <code>/data</code> in the container when using this script. For example, to use a <code>context.sh</code> file as <code>user_data</code>:
<pre>
./occi -a create -r compute -T user_data="file:///data/context.sh" [...]
</pre>

== Using Windows ==

In order to use the script on Windows, follow these instructions (from the docker terminal):
# Copy your certificates to the machine, taking into account that your Windows home folder is available at <code>/c/Users/<user name>/</code>. For example, if you have your <code>YourCert.p12</code> file on your Desktop, you can use the following command (the user name here is enol): <code>cp /c/Users/enol/Desktop/YourCert.p12 .</code> All the other steps remain the same.
# Clone the git repository: <code>git clone https://github.com/enolfc/fedcloud-userinterface.git</code>
# cd into the git repo and start using the commands:
<pre>
cd fedcloud-userinterface
OCCI_ENDPOINT=http://server4-epsh.unizar.es:8787 sh ./occi --action list --resource compute
</pre>