== Single GPGPU per node ==
A simple batch setup that assumes a physical node and its component GPGPU card expose a single '''Job Slot''' would simplify Resource Centre setup. Each GPGPU node could be partitioned from the non-GPGPU nodes using an access-control-list. However, most modern physical nodes contain and expose multiple CPU-cores to the batch system. If the physical system supports '''Virtualisation''', a CPU-core could be allocated to the GPGPU on the physical node, and a single virtual machine could expose the remainder of the job slots. For example: assume the physical host (wn1) has 8 cores; we can configure the node to declare (in Torque) "np=1" to the batch system. If we create a VM with "np=7", then all cores can be allocated to the batch system.
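
A minimal sketch of how this split might be declared in Torque's ''server_priv/nodes'' file, assuming the physical host is wn1, the virtual machine is registered under a separate node name (here wn1-vm, a hypothetical name), and a ''gpgpu'' node property is used as the access-control-list marker:

<pre>
# server_priv/nodes -- illustrative entries, not site-specific
# Physical host: 8 cores in total, 1 job slot kept for GPGPU work
wn1     np=1 gpgpu
# Virtual machine on the same host exposes the remaining 7 cores
wn1-vm  np=7
</pre>

GPGPU jobs could then be steered onto the physical node with a property-based request such as "qsub -l nodes=1:gpgpu", while ordinary jobs land on the VM's slots.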


== Multiple GPGPUs per Physical Node ==
Similar to the Virtualisation example above, a physical node with '''N''' GPGPU cards could be configured with

''np=#NUM_OF_GPGPUS''

A virtual machine could then present ''np=#NUM_OF_CORES-#NUM_OF_GPGPUS'' job slots.
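
As a concrete sketch, assume a hypothetical 16-core physical host (wn2) holding 4 GPGPU cards: the physical node would declare np=4 and its virtual machine the remaining 12 slots. In Torque's ''server_priv/nodes'' file this might look as follows (node names and the ''gpgpu'' property are illustrative):

<pre>
# Physical host: #NUM_OF_GPGPUS = 4 job slots, one per GPGPU card
wn2     np=4 gpgpu
# Virtual machine: #NUM_OF_CORES - #NUM_OF_GPGPUS = 16 - 4 = 12 job slots
wn2-vm  np=12
</pre>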
