
Federated Cloud GPGPU


GPGPU on Federated Cloud

GPGPU is available on selected sites of the EGI Federated Cloud, described in the table below:

Site Name: IISAS-GPU
Supported VOs: fedcloud.egi.eu, ops, dteam, moldyngrid, enmr.eu, vo.lifewatch.eu, acc-comp.egi.eu
Endpoints:
    OCCI: https://nova3.ui.savba.sk:8787/occi1.1/
    OpenStack: https://keystone3.ui.savba.sk:5000/v2.0
GPU templates/flavors:
    gpu1cpu6 (1 GPU + 6 CPU cores)
    gpu2cpu12 (2 GPU + 12 CPU cores)

Site Name: IISAS-Nebula
Supported VOs: acc-comp.egi.eu
Endpoints:
    OCCI: https://nebula2.ui.savba.sk:11443/
GPU templates/flavors:
    extra_large_2gpu (8 cores, 8 GB RAM, 2 NVIDIA K20m GPUs)
    extra_large_gpu (8 cores, 8 GB RAM, 1 NVIDIA K20m GPU)
    goliath_2gpu (14 cores, 56 GB RAM, 2 NVIDIA K20m GPUs)
    goliath_gpu (14 cores, 56 GB RAM, 1 NVIDIA K20m GPU)
    large_2gpu (4 cores, 4 GB RAM, 2 NVIDIA K20m GPUs)
    large_gpu (4 cores, 4 GB RAM, 1 NVIDIA K20m GPU)
    mammoth_2gpu (14 cores, 32 GB RAM, 2 NVIDIA K20m GPUs)
    mammoth_gpu (14 cores, 32 GB RAM, 1 NVIDIA K20m GPU)
    medium_2gpu (2 cores, 2 GB RAM, 2 NVIDIA K20m GPUs)
    medium_gpu (2 cores, 2 GB RAM, 1 NVIDIA K20m GPU)
    mem_extra_large_2gpu (8 cores, 32 GB RAM, 2 NVIDIA K20m GPUs)
    mem_extra_large_gpu (8 cores, 32 GB RAM, 1 NVIDIA K20m GPU)
    mem_large_2gpu (4 cores, 16 GB RAM, 2 NVIDIA K20m GPUs)
    mem_large_gpu (4 cores, 16 GB RAM, 1 NVIDIA K20m GPU)
    mem_medium_2gpu (2 cores, 8 GB RAM, 2 NVIDIA K20m GPUs)
    mem_medium_gpu (2 cores, 8 GB RAM, 1 NVIDIA K20m GPU)
    mem_small_2gpu (1 core, 4 GB RAM, 2 NVIDIA K20m GPUs)
    mem_small_gpu (1 core, 4 GB RAM, 1 NVIDIA K20m GPU)
    small_2gpu (1 core, 1 GB RAM, 2 NVIDIA K20m GPUs)
    small_gpu (1 core, 1 GB RAM, 1 NVIDIA K20m GPU)


Instantiate GPGPU VMs

Creating VMs with GPGPUs works like creating any other VM; you just need to select the appropriate template. First of all, set up your interface following the CLI setup guide.
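
For example, a minimal session setup might look like the sketch below; the VO name and the endpoint are examples taken from the table above, substitute your own:

# Create a VOMS proxy for one of the supported VOs (example VO shown):
voms-proxy-init --voms acc-comp.egi.eu --rfc
# voms-proxy-init stores the proxy in /tmp/x509up_u<uid> by default:
export X509_USER_PROXY=/tmp/x509up_u$(id -u)
# Point the client at one of the OCCI endpoints from the table:
export OCCI_ENDPOINT=https://nova3.ui.savba.sk:8787/occi1.1/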

Then you can discover which templates support GPGPU by describing them:

occi --endpoint  $OCCI_ENDPOINT \
     --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action describe --resource resource_tpl

This will show you the list of templates with a short description; look for the ones with gpu in the title (the flavors listed in the table above), e.g.:

[[ http://schemas.openstack.org/template/resource#f0cd78ab-10a0-4350-a6cb-5f3fdd6e6294 ]] 
title:        Flavor: gpu1cpu6
term:         f0cd78ab-10a0-4350-a6cb-5f3fdd6e6294
location:     /mixin/f0cd78ab-10a0-4350-a6cb-5f3fdd6e6294/
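
The describe output can be long; a convenience filter with standard shell tools (not a dedicated occi feature) narrows it down to the GPU flavors:

occi --endpoint  $OCCI_ENDPOINT \
     --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action describe --resource resource_tpl | grep -i gpu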

You can start any VM; for testing you may use one with CUDA support, such as the EGI-Cuda appliance. Start your VM with the selected templates and make sure that you add proper contextualization information to be able to log in (check the fedcloud FAQ for more information):

RES_TPL=<set this to the selected resource template>
OS_TPL=<set this to the selected VM image>
occi --endpoint  $OCCI_ENDPOINT \
     --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action create --resource compute \
     --mixin $OS_TPL --mixin $RES_TPL \
     --attribute occi.core.title="Testing GPU" \
     --context <add here any contextualization>
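
Once the create action succeeds, occi prints the location (URL) of the new compute resource. A quick way to check its state, and later to confirm the GPU is usable, is sketched below; the resource URL is a placeholder for the one returned by the create action:

occi --endpoint  $OCCI_ENDPOINT \
     --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action describe --resource <URL returned by the create action>

After logging in to the VM, check that the driver sees the GPU:

nvidia-smi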

If the available Virtual Appliances do not suit your needs, you can install the NVIDIA driver and CUDA toolkit on a VM yourself. They are available at http://www.nvidia.com/Download/index.aspx and https://developer.nvidia.com/cuda-downloads. See NVIDIA_CUDA_installer for a sample script to install them on Debian-based VMs.
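
As a rough sketch, the installation on a Debian-based VM usually follows the pattern below; package names and repository URLs change with every CUDA release, so follow the NVIDIA download pages linked above for the exact steps:

# Prerequisites for building the NVIDIA kernel module (Debian/Ubuntu):
sudo apt-get update
sudo apt-get install -y build-essential linux-headers-$(uname -r)
# Add NVIDIA's CUDA repository (a .deb downloaded from the CUDA
# downloads page), run apt-get update again, then install the cuda
# metapackage, which pulls in both the driver and the toolkit:
sudo apt-get install -y cuda
# After a reboot, verify that the driver sees the GPU:
nvidia-smi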

Create your own GPGPU Virtual Appliances

You can create and upload your own Virtual Appliances containing your applications; once endorsed by a VO, they can be replicated to the supporting sites.

We recommend using a tool like Packer for creating the images. Check the CUDA Packer file of the VMI endorsement repo for a working configuration that creates such a Virtual Appliance. Starting from the example, you can add new provisioners to install your applications.
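
For reference, turning such a Packer configuration into an image is a two-step build; the file name cuda.json below is an assumption standing in for the actual file in the VMI endorsement repo:

# Check the template for errors before building:
packer validate cuda.json
# Build the image; the resulting artifact can then be uploaded and
# endorsed by your VO:
packer build cuda.json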