The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @ egi.eu.

GPGPU-OpenNebula

From EGIWiki
Revision as of 11:31, 12 April 2017 by Astalos (talk | contribs)



Objective

To provide a testing Cloud site, based on the OpenNebula middleware, for testing GPGPU support.

Current status

The IISAS-Nebula site has been integrated into the EGI Federated Cloud and is accessible via the acc-comp.egi.eu VO.

HW configuration:

Management services: OpenNebula Cloud controller and Site BDII, running in virtual servers
IBM System x3250 M5, 1x Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz, 16 GB RAM, 1 TB disk

1 computing node: IBM dx360 M4 server with two NVIDIA Tesla K20 accelerators.
CentOS 7 with KVM/QEMU, PCI passthrough virtualization of GPU cards.

2.8 TB block storage via NFS
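The GPU cards are handed to guests via KVM PCI passthrough, which in OpenNebula 5.x is requested through a PCI section in the VM template. A minimal sketch is below; the vendor/device/class IDs are assumptions for a Tesla K20m and should be verified on the host with `lspci -nn`:

```
# Sketch of an OpenNebula VM template fragment requesting one GPU
# via PCI passthrough (IDs are examples; verify with lspci -nn).
PCI = [
  VENDOR = "10de",   # NVIDIA
  CLASS  = "0302",   # 3D controller
  DEVICE = "1028"    # assumed Tesla K20m device ID
]
```

The KVM hosts must also report the devices through the PCI monitoring probe (pci.conf in the KVM probes) so the scheduler knows which nodes carry GPUs.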

SW configuration:

Base OS: CentOS 7
Hypervisor: KVM
Middleware: OpenNebula 5.0.2
OCCI server: rOCCI-server 2.0.0

GPU-enabled flavors:

extra_large_2gpu      Extra Large Instance - 8 cores and 8 GB RAM + 2 GPU Nvidia K20m
extra_large_gpu       Extra Large Instance - 8 cores and 8 GB RAM + 1 GPU Nvidia K20m
goliath_2gpu          Goliath Instance - 14 cores and 56 GB RAM + 2 GPU Nvidia K20m
goliath_gpu           Goliath Instance - 14 cores and 56 GB RAM + 1 GPU Nvidia K20m
large_2gpu            Large Instance - 4 cores and 4 GB RAM + 2 GPU Nvidia K20m
large_gpu             Large Instance - 4 cores and 4 GB RAM + 1 GPU Nvidia K20m
mammoth_2gpu          Mammoth Instance - 14 cores and 32 GB RAM + 2 GPU Nvidia K20m
mammoth_gpu           Mammoth Instance - 14 cores and 32 GB RAM + 1 GPU Nvidia K20m
medium_2gpu           Medium Instance - 2 cores and 2 GB RAM + 2 GPU Nvidia K20m
medium_gpu            Medium Instance - 2 cores and 2 GB RAM + 1 GPU Nvidia K20m
mem_extra_large_2gpu  Extra Large Instance - 8 cores and 32 GB RAM + 2 GPU Nvidia K20m
mem_extra_large_gpu   Extra Large Instance - 8 cores and 32 GB RAM + 1 GPU Nvidia K20m
mem_large_2gpu        Large Instance - 4 cores and 16 GB RAM + 2 GPU Nvidia K20m
mem_large_gpu         Large Instance - 4 cores and 16 GB RAM + 1 GPU Nvidia K20m
mem_medium_2gpu       Medium Instance - 2 cores and 8 GB RAM + 2 GPU Nvidia K20m
mem_medium_gpu        Medium Instance - 2 cores and 8 GB RAM + 1 GPU Nvidia K20m
mem_small_2gpu        Small Instance - 1 core and 4 GB RAM + 2 GPU Nvidia K20m
mem_small_gpu         Small Instance - 1 core and 4 GB RAM + 1 GPU Nvidia K20m
small_2gpu            Small Instance - 1 core and 1 GB RAM + 2 GPU Nvidia K20m
small_gpu             Small Instance - 1 core and 1 GB RAM + 1 GPU Nvidia K20m
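With rOCCI-server as the OCCI endpoint, these flavors are exposed as resource_tpl mixins, so a GPU instance is requested by combining one of them with an image (os_tpl) mixin. A hedged sketch using the rOCCI command-line client follows; the image identifier, credential path, and VM title are placeholders:

```shell
# Instantiate one VM with the medium_gpu flavor (2 cores, 2 GB RAM, 1x K20m).
# os_tpl value and credential path are placeholders for illustration.
occi --endpoint https://nebula2.ui.savba.sk:11443/ \
     --auth x509 --voms \
     --user-cred ~/.globus/usercred.pem \
     --action create --resource compute \
     --mixin os_tpl#your_image_id \
     --mixin resource_tpl#medium_gpu \
     --attribute occi.core.title="gpu-test-vm"
```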

EGI federated cloud configuration:

GOCDB: IISAS-Nebula, https://goc.egi.eu/portal/index.php?Page_Type=Site&id=1785
ARGO monitoring: http://argo.egi.eu/lavoisier/status_report-sf?site=IISAS-Nebula&report=Critical&accept=html
OCCI endpoint: https://nebula2.ui.savba.sk:11443/
EGI AppDB: https://appdb.egi.eu/store/site/iisas-nebula
Supported VOs: acc-comp.egi.eu, ops, dteam
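Before creating anything, the endpoint can be probed for the templates it advertises; the GPU flavors listed above should appear among the resource_tpl mixins. A sketch with the same client (credential path is a placeholder):

```shell
# List the resource templates (flavors) advertised by the OCCI endpoint.
occi --endpoint https://nebula2.ui.savba.sk:11443/ \
     --auth x509 --voms \
     --user-cred ~/.globus/usercred.pem \
     --action list --resource resource_tpl
```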