
= GPGPU-WG KnowledgeBase Batch Schedulers SchedulerScenarios =
{{Template:Op menubar}} {{TOC_right}}

== Single GPGPU per node ==
A simple batch setup that assumes a physical node and its component GPGPU card expose a single '''Job Slot''' would simplify Resource Centre setup. Each GPGPU node could be partitioned from the non-GPGPU nodes using an access-control list. However, most modern physical nodes contain and expose multiple CPU cores to the batch system. If the physical system supports '''Virtualisation''', one CPU core could be allocated to the GPGPU on the physical node, and a single virtual machine could expose the remaining job slots. For example, if the physical host (wn1) has 8 cores, the node can be configured to declare (in Torque) "np=1" to the batch system; a VM on the same host declaring "np=7" then makes all 8 cores available as job slots.
[[Category:Task_forces]]
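As a minimal sketch of the single-GPGPU split described above (the hostnames, the file path, and the ''gpu'' node property name are illustrative assumptions, not from this page), the Torque server's nodes file for such a setup might look like:

```
# /var/spool/torque/server_priv/nodes  (hypothetical hosts)
wn1      np=1 gpu     # physical node: 1 job slot bound to the GPGPU
wn1-vm1  np=7         # VM on wn1: the remaining 7 CPU-only job slots
```

Jobs needing the GPGPU could then request the ''gpu'' property, while ordinary jobs land on the VM's slots.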


== Multiple GPGPUs per Physical Node ==
Similar to the Virtualisation example above, a physical node with '''N''' GPGPU cards could be configured with

''np=#NUM_OF_GPGPUS''

A virtual machine could then present

''np=#NUM_OF_CORES-#NUM_OF_GPGPUS'' job slots

'''[[GPGPU_Working_Group| << GPGPU Working Group main page]]'''

[[GPGPU-WG:GPGPU_Working_Group_KnowledgeBase | To Parent Page]]

* [[GPGPU-WG:GPGPU_Working_Group_KnowledgeBase:Batch_Schedulers:SchedulerScenarios:GPUOnlyQueue | GPGPU Queues assuming 1-core per GPGPU ]]
* [[GPGPU-WG:GPGPU_Working_Group_KnowledgeBase:Batch_Schedulers:SchedulerScenarios:MixedCPU_GPU_Queue | GPGPUs as a subset of queue resources]]
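The ''np'' split described in the sections above is simple arithmetic. As an illustrative sketch (the function name, hostname layout, and node-property spelling are assumptions, not from this page), a short script could generate the corresponding nodes-file entries for any core/GPGPU count:

```python
def nodes_file_lines(host, num_cores, num_gpgpus):
    """Split a physical host into a GPGPU entry (one job slot per card)
    and a VM entry exposing the remaining CPU-only job slots.
    Hypothetical Torque-style nodes-file layout."""
    return [
        f"{host}      np={num_gpgpus} gpus",         # np=#NUM_OF_GPGPUS
        f"{host}-vm1  np={num_cores - num_gpgpus}",  # np=#NUM_OF_CORES-#NUM_OF_GPGPUS
    ]

# e.g. the 8-core, single-GPGPU host from the first section
for line in nodes_file_lines("wn1", 8, 1):
    print(line)
```

This is only a bookkeeping aid; the actual slot accounting is done by the batch system once the nodes file is in place.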

Latest revision as of 16:02, 22 January 2015