GPGPU-FedCloud

From EGIWiki
Revision as of 12:02, 10 July 2015

Status of accelerated computing in Clouds

Additional development/support efforts are needed at all levels:

  • Chipset: HW virtualization support (otherwise some limitations)
  • OS level: correct kernel configuration for the accelerators
  • Hypervisor: configuration pass-through, vGPU
  • CMFs: VM start, scheduler
  • FedCloud facilities: accounting, information discovery
  • Application: VM images with correct drivers for specific chipsets
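As a quick sanity check for the first two levels (chipset and OS/kernel), the commands below — a sketch for an x86 Linux host, assuming Intel VT-x/AMD-V CPU flags and the `intel_iommu=on` (or `amd_iommu=on`) kernel parameter — verify that hardware virtualization and the IOMMU are available:

```shell
# Count CPU virtualization flags (vmx = Intel VT-x, svm = AMD-V); >0 means HW support
grep -c -E '(vmx|svm)' /proc/cpuinfo

# PCI pass-through additionally needs the IOMMU enabled on the kernel command line;
# check that the parameter is present and that the kernel picked it up:
cat /proc/cmdline
dmesg | grep -i -e DMAR -e IOMMU
```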

Accelerators

GPGPU (General-Purpose computing on Graphical Processing Units)

NVIDIA GPU/Tesla/GRID, AMD Radeon/FirePro, Intel HD Graphics,...

Virtualization using VGA pass-through, vGPU (GPU partitioning) - NVIDIA GRID accelerators

Intel Many Integrated Core Architecture

Xeon Phi Coprocessor

Virtualization using PCI pass-through

Specialized PCIe cards with accelerators

DSP (Digital Signal Processors)

FPGA (Field Programmable Gate Array)

Not commonly used in cloud environments

Hypervisors

QEMU/KVM

Supports only the pass-through virtualization model

vGPU support is under development

Instructions for configuring passthrough in KVM (link ???)
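For reference, KVM/libvirt pass-through boils down to detaching the GPU from the host drivers and adding a `<hostdev>` element to the guest definition. The fragment below is a sketch only; the PCI address `0000:84:00.0` is a placeholder for the card's actual address as reported by `lspci`:

```xml
<!-- Fragment of a libvirt domain XML: pass the GPU at host PCI address
     0000:84:00.0 (placeholder) through to the guest.
     The host must boot with the IOMMU enabled (e.g. intel_iommu=on), and
     the device must first be detached from the host, e.g.:
       virsh nodedev-detach pci_0000_84_00_0 -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x84' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```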

Citrix XenServer 6, VMware ESXi 5.1

Support both pass-through and vGPU virtualization models

Limitations:

  • vGPU support requires certified server HW
  • Live VM migration is not supported
  • VM snapshot with memory is not supported

Security issues

Cloud Management Frameworks

Some work done with PCI passthrough

vGPU support is at a very early stage

Work to be done:

  • Define VM types/flavors with attributes for GPGPU
  • Modify VM start to allow passthrough or allocate vGPU
  • Modify scheduler to allocate VMs with GPGPU correctly
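In OpenStack (the CMF used in the progress section below), the first two items map onto the existing PCI pass-through machinery: the compute node whitelists the GPU devices, the controller defines an alias, and a flavor requests it. A sketch of the Kilo-era `nova.conf` options, using the NVIDIA vendor ID `10de` and a placeholder product ID:

```ini
# nova.conf on the compute node: expose the GPUs to nova
# (vendor 10de = NVIDIA; the product_id is a placeholder for the actual card)
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "10de", "product_id": "1028"}

# nova.conf on the controller: name the device class for use in flavors
pci_alias = {"vendor_id": "10de", "product_id": "1028", "name": "gpu"}
```

A GPGPU flavor then carries the alias as an extra spec, e.g. `nova flavor-key m1.gpu set "pci_passthrough:alias"="gpu:1"`, and the PCI-aware scheduler filter (`PciPassthroughFilter`) places such VMs only on nodes with a free device.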

VM images

VM images should contain proper drivers and libraries for specific accelerators

  • Not transferable from site to site

A more suitable approach is to use vanilla images, with GPU support provided by the cloud provider

  • Using VM contextualization like cloud-init for installing applications
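To illustrate such contextualization, a minimal cloud-init user-data file that installs the GPU driver and CUDA toolkit into a vanilla Ubuntu image at first boot — a sketch; the package names are distribution- and driver-version-dependent placeholders:

```yaml
#cloud-config
# Sketch: contextualize a vanilla Ubuntu 14.04 image for GPGPU use on
# first boot; package names below are placeholders.
packages:
  - nvidia-current
  - nvidia-cuda-toolkit
runcmd:
  - [ modprobe, nvidia ]
```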

Or using VM snapshots

  • May require support from site admins

FedCloud facilities

AppDB

  • VM images are rather site-specific: does it make sense to use AppDB?

Information discovery

  • Should use a GLUE2 schema similar to the one used by grid sites with GPGPU
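To make the idea concrete, a site could publish the accelerator alongside its GLUE2 ExecutionEnvironment object. In the LDIF sketch below, `GLUE2ExecutionEnvironment` and `Platform` are real GLUE2 names, but the two accelerator attributes are invented placeholders: the actual GPGPU extension of the schema would need to be agreed with the grid sites.

```
# Hypothetical LDIF sketch. GLUE2ExecutionEnvironment is a real GLUE2
# object class; the two accelerator attributes below are invented
# placeholders pending an agreed GPGPU extension of the schema.
dn: GLUE2ResourceID=gpu-node-1,o=glue
objectClass: GLUE2ExecutionEnvironment
GLUE2ResourceID: gpu-node-1
GLUE2ExecutionEnvironmentPlatform: amd64
GLUE2ExecutionEnvironmentAcceleratorType: GPU
GLUE2ExecutionEnvironmentAcceleratorModel: NVIDIA Tesla K20
```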

Accounting

  • How to account for GPU usage? (again, to be coordinated with the grid)

Brokering, monitoring, VM management

Possible configuration

Dedicated cloud site with GPGPU

Homogeneous: identical worker nodes

Single VM type, single VM per node

  • Simple configuration, no conflicting resources, no need to modify scheduler

Cloud site with OS-level hypervisor

VMs can have direct access to hardware resources and share them

Limited to the same OS/kernel as the host

Related work

Progress

  • May 2015
    • Review of available technologies
    • GPGPU virtualisation in KVM/QEMU
    • Performance testing of passthrough
HW configuration: 
IBM dx360 M4 server with two NVIDIA Tesla K20 accelerators.
Ubuntu 14.04.2 LTS with KVM/QEMU, PCI passthrough virtualization of GPU cards.
Tested application:
NAMD molecular dynamics simulation, STMV test example.
Performance results:
Tested application runs 6% slower in virtual machine compared to direct run on tested server.
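The 6% figure is the usual relative overhead, (t_vm / t_bare) − 1, computed from the NAMD wall-clock times. The sketch below uses made-up times of 1000 s and 1060 s purely to illustrate the calculation:

```shell
# Illustrative numbers only: t_bare = bare-metal run time, t_vm = VM run time
t_bare=1000
t_vm=1060
awk -v b="$t_bare" -v v="$t_vm" \
    'BEGIN { printf "virtualization overhead: %.1f%%\n", (v / b - 1) * 100 }'
# prints: virtualization overhead: 6.0%
```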
  • June 2015
    • Creating cloud site with GPGPU support
Configuration: master node, 2 worker nodes (IBM dx360 M4 server, see above)
Base OS: Ubuntu 14.04.2 LTS
Hypervisor: KVM
Middleware: OpenStack Kilo
  • Next steps
    • Testing the OpenStack scheduler
    • Integration with EGI-Engage Fedcloud