VT GPGPU/DraftSurvey
EGI User Community GPGPU Survey
Overview
The use of General Purpose Graphics Processing Units (GPGPUs) and accelerator devices, such as Intel's Xeon Phi co-processor, has grown dramatically over the past few years across all the major scientific disciplines. With three of the top 10 systems in the current June 2012 Top500 supercomputer list using NVIDIA GPGPUs, we expect the number of GPGPU deployments at grid resource centres to grow significantly over the next few years.
The purpose of this survey is to gauge how users currently use, or intend to use, GPGPUs and other accelerator devices in Grid or hybrid Grid/Cloud environments, and whether they would use grids for this purpose. In particular, we would like to determine whether the User Communities have a specific need for a more tightly integrated GPGPU capability within the grid environment. We also welcome further comments and feedback on any other aspect.
All data collected from the survey feedback will be processed anonymously.
User Profile
Q1 Which Scientific Discipline do you work in?
Text Box
Q2 Do you currently use grid or cloud technologies?
Yes
No (Please comment why?)
Comment Box
Q3 Do you use GPGPU-based applications for your scientific computations?
Yes
No (Please comment why?)
Comment Box
Q4 What speed-up have you achieved compared with non-parallel (CPU-only) computing?
Question Logic: if Q3 was Yes
Text box
Q5 Do you intend to use GPGPU-based applications within the next 18 months?
No
Yes (Please comment on the time frame, e.g. 3 months)
Comment box
Q6 Would you like to access GPGPU-based resources through the European Grid Infrastructure (EGI)?
Yes
No (Please comment why?)
Comment box
Application Development
Please answer this section if you answered Yes to Q3.
If you have any other use cases you would like to bring to our attention, please email us at vt-gpgpu_AT_egi.eu.
Q7 Do you develop or intend to develop any GPGPU based applications?
Yes
No (Comment why?)
Comment
Q8 What do you expect from high-level programming-language abstractions for writing parallel programs? Is there a need for such languages?
Text box
Q9 What particular numerical methods would you like to use on GPGPUs? (e.g. CUBLAS for linear algebra, CUSPARSE for sparse matrices, CURAND for pseudo-random number generation, NPP, etc.)
Text box
Q10 What Application Programming Interface do/will you use? (e.g. CUDA, OpenCL, OpenACC, etc.)
Text box
Q11 Do you intend to develop code which depends on other application frameworks (e.g. MPI, BLAST, etc.)?
No
Yes (Comment which ones?)
Comment
Q12 Are there any particular GPGPU applications/solvers/libraries/methods you would like to have on a GPGPU cluster?
No
Yes (Comment which ones?)
Comment
Q13 Is there a market need for additional applications/solvers/libraries/methods?
No
Yes (Provide an example as a comment below)
Comment
Q14 Are you performing hybrid computations (CPU + GPU), or is the CPU used only for I/O, communication, and computation management?
Yes
No (Comment why?)
Comment
Q15 What is the optimal ratio between the number of GPUs and CPU cores for your application?
Question Logic: if Q14 was Yes
Text box
Q16: If you are utilizing all GPU resources on a node, do you expect to be the exclusive user of that node (due to the cost of sharing PCI-Express and RAM bandwidth, etc.)?
No
Yes (Comment why?)
Comment
Q17: Do you expect any particular network topology on the cluster, or utilities for exchanging data between nodes (e.g. NVIDIA GPUDirect)?
No
Yes (Comment why?)
Comment
Resource Centres GPGPU administration
These questions are intended to be answered by Resource Centre administrators. The goal is to determine how widely GPGPUs are (or will be) provided in the infrastructure; how site administrators configure these resources in the batch system/scheduler; what allocation policies apply; how these GPGPUs are made visible on the grid infrastructure; etc.
Q1 Does your site currently provide GPGPU resources, or does it intend to provide them? ANS: Yes + optional text, or No
Q2 Which LRMS (i.e. batch system and job scheduler) do you use?
Q3 How are the GPU resources seen from the LRMS point of view?
a) a manually configured consumable resource
b) GPU management is natively supported by the LRMS
c) the LRMS is not aware of GPUs (i.e. users have to request a whole node to get exclusive access to the GPUs)
d) other?
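As an illustration of option (a), a consumable-resource setup in Torque might look like the following sketch; the node names, core counts, and GPU counts are hypothetical:

```
# /var/spool/torque/server_priv/nodes
# Declare 2 GPUs per node as a schedulable resource (Torque 2.5.x and later)
gpunode01 np=12 gpus=2
gpunode02 np=12 gpus=2
```

A user would then request a device with, e.g., `qsub -l nodes=1:gpus=1 job.sh`, and the scheduler decrements the per-node GPU count as jobs are placed.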
Q4 Does every node have a GPU, or only a subset of them? ANS: Yes/No
Q5 Is access to GPGPU-enabled hosts restricted to specific users or groups?
Q6 Link to hardware description (number of GPU nodes, type of GPU devices, number of cards per node). ANS: Text
Q7 Link to end-user documentation (i.e. how to submit a GPU job). ANS: Text
Q8 Did you implement any additional mechanisms to manage access to GPU devices (e.g. custom prolog/epilog scripts, http://sourceforge.net/projects/cudawrapper/, ...)? ANS: Text
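By way of illustration for Q8, one common prolog-script mechanism is to export CUDA_VISIBLE_DEVICES so that a job only sees the GPUs the scheduler assigned to it. A minimal sketch follows; the ALLOCATED_GPUS value is a stand-in, since a real prolog would derive it from the LRMS (e.g. by parsing $PBS_GPUFILE under Torque):

```shell
#!/bin/sh
# Sketch of a prolog-style GPU fence: limit the job to its assigned devices.
# ALLOCATED_GPUS is a hypothetical stand-in; a real prolog would read the
# assignment from the LRMS (e.g. $PBS_GPUFILE under Torque).
ALLOCATED_GPUS="0,2"

# The CUDA runtime enumerates only the devices listed here, so the job
# cannot touch GPUs belonging to other jobs on the same node.
export CUDA_VISIBLE_DEVICES="$ALLOCATED_GPUS"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```

Wrapper approaches such as the cudawrapper project linked above work on the same principle, interposing on device enumeration rather than setting the environment variable directly.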