GPGPU-WG KnowledgeBase Batch Schedulers Torque
Revision as of 18:27, 27 January 2014
The latest production version of Torque (version 4.2.6) is not widely used in the EGI production Grid. Further information on configuring support for GPGPUs can be found under [Torque GPGPU scheduling].
Support for GPGPUs was introduced in Torque 2.5.6. The number of GPGPUs made available on a client node is controlled through the Torque 'nodes' file on the Torque server (normally /var/spool/torque/server_priv/nodes). For example, to indicate that 2 GPGPUs are available on wn001.example.com, we would set the following in the nodes file:
wn001.example.com np=8 gpus=2
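Once the nodes file advertises GPUs, a job can request them with the usual Torque resource syntax. A minimal job-script sketch follows; the walltime value is a placeholder, and Torque exports the list of GPUs allocated to the job in the file named by $PBS_GPUFILE:

```shell
#!/bin/sh
# Request 1 node with 8 processors and both of its GPUs
#PBS -l nodes=1:ppn=8:gpus=2
#PBS -l walltime=01:00:00

# $PBS_GPUFILE lists the GPUs assigned to this job, one per line
cat "$PBS_GPUFILE"
```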
NVidia Support
If you are using high-end NVidia GPGPUs (Kepler/Fermi), Torque can extract and publish information about the status of these cards. However, you must re-compile Torque to support these features. See: [http://docs.adaptivecomputing.com/torque/archive/4-0-1/help.htm#topics/3-nodes/NVIDIAGPGPUs.htm NVidia GPGPU support]. This information could be quite useful for GIP plugins to determine the status of the available GPGPUs.
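A GIP plugin could obtain per-node GPU counts by parsing the XML emitted by `pbsnodes -x`. The sketch below is a minimal, hypothetical probe: the embedded XML sample is illustrative only (real Torque output carries many more fields, and field names can vary between Torque versions), and in practice the XML text would come from running `pbsnodes -x` rather than a literal string.

```python
# Sketch of a GIP-style probe reading GPU counts from `pbsnodes -x`-shaped XML.
# The SAMPLE document below is an illustrative stand-in, not verbatim Torque output.
import xml.etree.ElementTree as ET

SAMPLE = """<Data>
  <Node>
    <name>wn001.example.com</name>
    <state>free</state>
    <np>8</np>
    <gpus>2</gpus>
  </Node>
</Data>"""


def gpu_counts(xml_text):
    """Return {node_name: gpu_count} for every node reporting a <gpus> field."""
    counts = {}
    for node in ET.fromstring(xml_text).findall("Node"):
        name = node.findtext("name")
        gpus = node.findtext("gpus")
        if name and gpus is not None:
            counts[name] = int(gpus)
    return counts


print(gpu_counts(SAMPLE))  # → {'wn001.example.com': 2}
```

In a real plugin the same function could be fed `subprocess.check_output(["pbsnodes", "-x"])`, keeping the parsing logic independent of how the XML is obtained.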
Caveat: This feature does not work with mid-to-low-range GTX cards.