Federated Cloud Containers
EGI Federated Cloud clients docker image
EGI has produced an egifedcloud/fedcloud-userinterface docker image with OCCI and VOMS clients configured for use with the EGI FedCloud. If you have a working docker installation, you can get it with the following command:
docker pull egifedcloud/fedcloud-userinterface
The image is based on Ubuntu with the latest versions of rOCCI-cli (as available in the rOCCI-cli AppDB entry) and of the VOMS clients (as available in UMD) installed on top. You can run the commands easily with docker:
docker run -it egifedcloud/fedcloud-userinterface occi [args]
or
docker run -it egifedcloud/fedcloud-userinterface voms-proxy-init [args]
To ease the usage of the docker client, you can clone the git repository https://github.com/enolfc/fedcloud-userinterface, which contains a helper script: occi. The script checks whether you have a valid proxy and creates one for you if none is found (it expects to find your certificates under ~/.globus; check the installation of certificate files for more information on certificates). It then runs the occi command against the endpoint defined in the environment variable OCCI_ENDPOINT, with any options passed to it, e.g.:
OCCI_ENDPOINT=http://server4-epsh.unizar.es:8787 ./occi --action list --resource compute
will execute action list on resource compute for endpoint http://server4-epsh.unizar.es:8787.
The current directory will be mounted as a volume at /data in the container when using this script. For example, to use a context.sh file as user_data:
./occi -a create -r compute -T user_data="file:///data/context.sh" [...]
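As an illustration, context.sh could be a simple contextualisation script executed at boot inside the new VM. The content below is a hypothetical example, not part of the helper repository; adapt it to your needs:

```shell
#!/bin/sh
# Hypothetical contextualisation script passed as user_data; it runs
# once when the VM boots. Here it installs a package and leaves a
# marker file so you can verify contextualisation happened.
apt-get update && apt-get install -y htop
echo "contextualised at $(date)" > /var/log/context.log
```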
Using Windows
In order to use the script on Windows, follow these instructions (from the docker terminal):
- Follow the instructions below, taking into account that in order to perform step 2 (copying the certificates to the machine) you can access your Windows home folder at /c/Users/<user name>/. For example, if you have your YourCert.p12 file on your Desktop, you can use the following command (the user name here is enol): cp /c/Users/enol/Desktop/YourCert.p12 . All the other steps remain the same.
- Clone the git repository:
git clone https://github.com/enolfc/fedcloud-userinterface.git
- cd into the git repo and start using the commands:
cd fedcloud-userinterface
OCCI_ENDPOINT=http://server4-epsh.unizar.es:8787 sh ./occi --action list --resource compute
Running Docker Containers in the EGI Federated Cloud
Docker containers can be executed at any EGI Federated Cloud site by either:
- (recommended) using a pre-configured image with docker like the EGI Docker image
- installing docker on top of an existing VM (e.g. by following the installation instructions on docker docs)
Using the EGI Docker image
The EGI Docker image is a VM image based on Ubuntu 14.04 with docker installed and running. You can start this image like any other image available from AppDB.
- Go to the EGI Docker image entry in AppDB
- Check the IDs of the OCCI templates and endpoints to run the image for your VO at the selected site
- Use an ssh-key when creating the VM (check the FAQ for more info)
- (Optional) Some sites may require the allocation of a public IP before you can log in
- Then you can either log in into the VM and use docker from there, or configure your docker client to connect to the remote VM
Using docker from inside the VM
You can log in with the user ubuntu and your private ssh key:
ssh -i <private key> ubuntu@<your VM ip>
Verify that docker is installed correctly; the following command downloads a test image and runs it in a container:
ubuntu@fedcloud_vm:~$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b901d36b6f2f: Pull complete
0a6ba66e537a: Pull complete
Digest: sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
Status: Downloaded newer image for hello-world:latest

Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/userguide/
Start using docker:
ubuntu@fedcloud_vm:~$ sudo docker run busybox echo "hello"
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
c00ef186408b: Pull complete
ac6a7980c6c2: Pull complete
Digest: sha256:e4f93f6ed15a0cdd342f5aae387886fba0ab98af0a102da6276eaf24d6e6ade0
Status: Downloaded newer image for busybox:latest
hello
Connect remotely to the VM
Alternatively, you can use docker-machine to configure your VM so that you can run docker commands from your own computer. Use the following command to do so:
docker-machine create --driver generic \
    --generic-ip-address <ip of your VM> \
    --generic-ssh-user ubuntu \
    --generic-ssh-key <your private ssh key> \
    <a name for the VM>
then configure your shell to connect to that VM:
eval "$(docker-machine env <name of the VM>)"
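For reference, the eval above works because docker-machine env prints shell export statements roughly like the following (the exact IP and paths depend on your setup; this is a sketch, not captured output):

```shell
# Approximate output of `docker-machine env <name of the VM>`;
# eval-ing it points the docker CLI at the remote daemon over TLS.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://<ip of your VM>:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/<name of the VM>"
export DOCKER_MACHINE_NAME="<name of the VM>"
```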
and start using docker:
$ docker run docker/whalesay cowsay boo
Unable to find image 'docker/whalesay:latest' locally
latest: Pulling from docker/whalesay
2880a3395ede: Pull complete
515565c29c94: Pull complete
98b15185dba7: Pull complete
2ce633e3e9c9: Pull complete
35217eff2e30: Pull complete
326bddfde6c0: Pull complete
3a2e7fe79da7: Pull complete
517de05c9075: Pull complete
8f17e9411cf6: Pull complete
ded5e192a685: Pull complete
Digest: sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
Status: Downloaded newer image for docker/whalesay:latest
 _____
< boo >
 -----
    \
     \
      \
                    ##        .
              ## ## ##       ==
           ## ## ## ##      ===
       /""""""""""""""""___/ ===
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
       \______ o          __/
        \    \        __/
          \____\______/
Clusters
You can run several docker clusters management tools on the EGI FedCloud. Here we detail the configuration process for some of them.
Docker Swarm
This page is under construction.
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host that can be accessed with the standard Docker API. Any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
In order to build a Swarm cluster in FedCloud you will need to start several VMs and configure them to run the Swarm. It is recommended that you set up TLS, especially when you use the system over an untrusted network. Below you can find instructions to deploy the Swarm using docker-machine (which will set up TLS for you) or manually. Instructions for a similar deployment using orchestrators will be provided soon.
Docker-machine
Pre-requisites
- Install docker-machine
- Start the VMs at your preferred site that will form your initial cluster. These VMs must be accessible via ssh with a public key. You should have:
- 1 VM for the discovery service (consul)
- N VMs for the Swarm nodes
Setting up discovery service
In order to use advanced Swarm features (such as overlay networks) you will need to set up a discovery service. For these instructions, we are using consul.
First, initialise docker on the discovery VM:
docker-machine create --driver generic \
    --generic-ssh-key <your-ssh-key> \
    --generic-ssh-user <your-ssh-user> \
    --generic-ip-address <discovery-vm-ip> \
    discovery
Once done, load the environment to use this VM with docker:
eval $(docker-machine env discovery)
And run the discovery service:
docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
Save the IP address of the VM (preferably the private one) in an environment variable for later:
CONSUL_IP=<IP of your VM>
Start the Swarm
First, start the Swarm manager with docker-machine:
docker-machine create --driver generic \
    --generic-ssh-key <your-ssh-key> \
    --generic-ssh-user <your-ssh-user> \
    --generic-ip-address <manager-vm-ip> \
    --swarm --swarm-master \
    --swarm-discovery consul://$CONSUL_IP:8500 \
    --engine-opt="cluster-store=consul://$CONSUL_IP:8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    manager
And for every other node in the cluster run the following command:
docker-machine create --driver generic \
    --generic-ssh-key <your-ssh-key> \
    --generic-ssh-user <your-ssh-user> \
    --generic-ip-address <node-vm-ip> \
    --swarm \
    --swarm-discovery consul://$CONSUL_IP:8500 \
    --engine-opt="cluster-store=consul://$CONSUL_IP:8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    <node-name>
When finished, you can check with docker-machine ls that all the VMs are ready:
$ docker-machine ls
NAME        ACTIVE   DRIVER    STATE     URL                       SWARM              DOCKER    ERRORS
discovery   *        generic   Running   tcp://172.16.8.149:2376                      v1.11.2
manager     -        generic   Running   tcp://172.16.8.150:2376   manager (master)   v1.11.2
node-1      -        generic   Running   tcp://172.16.8.151:2376   manager            v1.11.2
node-2      -        generic   Running   tcp://172.16.8.152:2376   manager            v1.11.2
To start using your Swarm, get the environment variables from docker-machine:
eval $(docker-machine env manager --swarm)
Check the swarm info:
$ docker info
Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 3
Server Version: swarm/1.2.3
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 3
 manager: 172.16.8.150:2376
  └ ID: NREI:A2CA:XHK3:ZFXZ:TUY3:2HWM:G6QI:WHMS:46OG:M6L5:BK4H:XAJD
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-07-11T07:57:29Z
  └ ServerVersion: 1.11.2
 node-1: 172.16.8.151:2376
  └ ID: WTPE:7ZNL:W734:NB77:WDZA:45NR:JNWK:XUQ4:SEDP:UE4R:UDTQ:ORQ5
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-07-11T07:57:33Z
  └ ServerVersion: 1.11.2
 node-2: 172.16.8.152:2376
  └ ID: PZ5U:26WI:BRQW:WKA6:6WNM:E2MN:3OAT:AOM5:OCU4:IMIK:C53F:7ZHS
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-07-11T07:57:14Z
  └ ServerVersion: 1.11.2
Plugins:
 Volume:
 Network:
Kernel Version: 3.13.0-77-generic
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 5.987 GiB
Name: bc777ee3f68c
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support
You can add more nodes as you need them with the previous docker-machine command.
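Growing the cluster can be scripted; the sketch below adds two hypothetical nodes by repeating the same docker-machine command as above in a loop (the IP placeholders and node names are examples, not from the original instructions):

```shell
# Hypothetical loop to grow the Swarm: each iteration provisions one
# more VM as a Swarm node with the same engine options as before.
for i in 3 4; do
  docker-machine create --driver generic \
    --generic-ssh-key <your-ssh-key> \
    --generic-ssh-user <your-ssh-user> \
    --generic-ip-address <node-$i-vm-ip> \
    --swarm \
    --swarm-discovery consul://$CONSUL_IP:8500 \
    --engine-opt="cluster-store=consul://$CONSUL_IP:8500" \
    --engine-opt="cluster-advertise=eth0:2376" \
    node-$i
done
```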
Manual setup
These instructions allow you to set up a Swarm with TLS support manually. This requires creating a CA that issues certificates for your servers and clients.
Pre-requisites
- Start the VMs at your preferred site that will form your initial cluster. These VMs must be accessible via ssh with a public key. You should have:
- 1 VM for the discovery service (consul) and Swarm manager
- N VMs for the Swarm nodes
- Install docker or use one of the docker-ready images provided as described above.
Note: the instructions below assume an Ubuntu system; some details may differ on other distributions.
Create your CA
First create a new CA under $HOME/.certs:
mkdir $HOME/.certs
CA_ID=$(uuid | cut -f1 -d"-")
openssl req -x509 -nodes -days 1825 -newkey rsa:2048 \
    -out $HOME/.certs/ca.pem -outform PEM -keyout $HOME/.certs/ca.key \
    -subj "/DC=EU/DC=EGI/CN=Docker Swarm $CA_ID"
For each of your VMs, you will need to create a private key and certificate:
IP=<your VM IP>
NAME=<your VM name>
openssl genrsa -out $HOME/.certs/$NAME.key 2048
openssl req -subj "/CN=$NAME" -new -key $HOME/.certs/$NAME.key -out $HOME/.certs/$NAME.csr
OPENSSL_CNF=`mktemp`
cat > $OPENSSL_CNF << EOF
[ v3_req ]
subjectAltName = @alt_names

# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment

[alt_names]
IP.1 = $IP
EOF
openssl x509 -req -days 1825 -in $HOME/.certs/$NAME.csr -CA $HOME/.certs/ca.pem -CAkey $HOME/.certs/ca.key \
    -CAcreateserial -out $HOME/.certs/$NAME.pem -extensions v3_req -extfile $OPENSSL_CNF
rm $OPENSSL_CNF
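Before distributing the files, you can optionally sanity-check an issued certificate against the CA. This check is not part of the original recipe; it assumes the file names produced by the commands above ($NAME still set):

```shell
# Verify that the server certificate chains to our CA; prints
# "<file>: OK" on success.
openssl verify -CAfile $HOME/.certs/ca.pem $HOME/.certs/$NAME.pem
# Confirm the subjectAltName extension carries the VM's IP address.
openssl x509 -in $HOME/.certs/$NAME.pem -noout -text | grep -A1 "Subject Alternative Name"
```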
Copy the certificates to the VMs and move them to /etc/docker:
scp $HOME/.certs/<VM name>.pem <VM IP>:server.pem
scp $HOME/.certs/<VM name>.key <VM IP>:server-key.pem
scp $HOME/.certs/ca.pem <VM IP>:ca.pem
ssh <VM IP> 'sudo mv *.pem /etc/docker'
Last, create a certificate for your client:
openssl genrsa -out $HOME/.certs/key.pem 2048
openssl req -subj "/CN=client" -new -key $HOME/.certs/key.pem -out $HOME/.certs/client.csr
openssl x509 -req -days 1825 -in $HOME/.certs/client.csr -CA $HOME/.certs/ca.pem -CAkey $HOME/.certs/ca.key \
    -CAcreateserial -out $HOME/.certs/cert.pem
Setup Swarm manager
Log in to the VM that will act as manager, edit /etc/default/docker and add the following contents:
DOCKER_OPTS='
-H tcp://0.0.0.0:2376
-H unix:///var/run/docker.sock
--storage-driver aufs
--tlsverify
--tlscacert /etc/docker/ca.pem
--tlscert /etc/docker/server.pem
--tlskey /etc/docker/server-key.pem
'
And restart the docker daemon:
sudo service docker restart
Start the discovery service:
sudo docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
And the Swarm manager:
sudo docker run -d -p 3376:3376 -v /etc/docker:/certs:ro swarm manage \
    --tlsverify --tlscacert=/certs/ca.pem --tlscert=/certs/server.pem --tlskey=/certs/server-key.pem \
    --host=0.0.0.0:3376 consul://<ip of swarm master>:8500
Now you should be able to connect from your client machine to the swarm:
docker --tlsverify --tlscacert=$HOME/.certs/ca.pem --tlscert=$HOME/.certs/cert.pem \
    --tlskey=$HOME/.certs/key.pem -H <ip of swarm master>:3376 info
Setup Swarm nodes
For each of the nodes of your Swarm, log in via ssh and edit /etc/default/docker with the following contents:
DOCKER_OPTS='
-H tcp://0.0.0.0:2376
-H unix:///var/run/docker.sock
--storage-driver aufs
--tlsverify
--tlscacert /etc/docker/ca.pem
--tlscert /etc/docker/server.pem
--tlskey /etc/docker/server-key.pem
--cluster-store=consul://<your swarm manager ip>:8500
--cluster-advertise=eth0:2376
'
Restart the docker daemon:
sudo service docker restart
And join the swarm:
sudo docker run -d swarm join --addr=<node ip>:2376 consul://<your swarm manager ip>:8500
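To double-check that a node registered correctly, you can also query the discovery service with the swarm image's list command (an optional check, not part of the original steps; run it from any host that can reach consul):

```shell
# List the node addresses currently registered in consul; every node
# that joined the Swarm should appear here after a few seconds.
sudo docker run --rm swarm list consul://<your swarm manager ip>:8500
```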
Back to your client, you can check again that the nodes are in the Swarm:
$ docker --tlsverify --tlscacert=$HOME/.certs/ca.pem --tlscert=$HOME/.certs/cert.pem \
      --tlskey=$HOME/.certs/key.pem -H <ip of swarm master>:3376 info
Containers: 3
 Running: 3
 Paused: 0
 Stopped: 0
Images: 3
Server Version: swarm/1.2.3
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 3
 manualswarmb-1: 172.16.8.155:2376
  └ ID: V7A6:E4D2:E7Z3:VQEY:JV7M:2FV4:OWXP:GQZH:KA77:CYRI:JINA:2XLT
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ UpdatedAt: 2016-07-19T08:46:13Z
  └ ServerVersion: 1.11.2
 manualswarmb-2: 172.16.8.154:2376
  └ ID: ZXHJ:K2BB:ONRA:EWJY:H2YW:B3OE:ACWD:WAVO:PJY7:PDRO:362H:QFIE
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ UpdatedAt: 2016-07-19T08:46:20Z
  └ ServerVersion: 1.11.2
 manualswarmb-3: 172.16.8.156:2376
  └ ID: UIPI:UQ7W:RPEX:LO4C:ZFAJ:LQ47:EFOC:G534:AL5N:C45E:2TIK:RLC3
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.996 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-77-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ UpdatedAt: 2016-07-19T08:46:03Z
  └ ServerVersion: 1.11.2
Plugins:
 Volume:
 Network:
Kernel Version: 3.13.0-77-generic
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 5.987 GiB
Name: 28a4ea0e9e61
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support
For convenience you can set the following environment variables to simplify the docker commands:
DOCKER_HOST=<your swarm manager ip>:3376
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/home/ubuntu/.certs
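Once those variables are exported, the plain docker CLI talks to the Swarm manager over TLS without the long --tls* flags. A sketch (the IP is an example; DOCKER_CERT_PATH assumes your client files are named ca.pem, cert.pem and key.pem as in the steps above):

```shell
# Export the variables so every docker command targets the Swarm
# manager instead of a local daemon.
export DOCKER_HOST=tcp://<your swarm manager ip>:3376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.certs
# Now plain commands go to the Swarm:
docker info
docker run busybox echo "hello from the swarm"
```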