The wiki is deprecated and due to be decommissioned by the end of September 2022.
The content is being migrated to other platforms; new updates will be ignored and lost.
If needed, you can get in touch with the EGI SDIS team using operations @

GPGPU Working Group

From EGIWiki

Coordinator: Miroslav Ruda/CESNET, John Walsh/TCD

Meetings page: Agendas

Mailing list: vt-gpgpu (at)

Duration: Sep 2013



Over the summer of 2012, the EGI GPGPU Virtual Team developed and distributed two surveys - a Grid User oriented survey and a Resource Centre (RC) oriented survey - on the current and future use of General Purpose Graphics Processing Units (GPGPU) on the European Grid Infrastructure (EGI). The key results of the surveys, which were presented at the EGI Technical Forum 2013, showed that:

  • many Resource Centres had already deployed GPGPUs or planned to do so;
  • users would be interested in using these resources.

However, there is currently no coherent or standardised means of discovering or accessing GPGPU resources on EGI or similar grid infrastructures, and integrating such resources is a non-trivial task. The open issues include:

  • Batch System integration best practices;
  • the evaluation and adoption of an appropriate GlueSchema;
  • GPGPU resource usage accounting;
  • GPGPU resource availability and reliability testing.
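To make the batch-system issue concrete, a GPGPU job request on a Torque-based Resource Centre might look like the following sketch. This is a minimal, illustrative example only: it assumes a Torque installation with GPU scheduling enabled, the resource keyword (`gpus=1`) is site-dependent, and the binary `my_cuda_app` is a placeholder for a real CUDA application.

```shell
#!/bin/bash
#PBS -N gpgpu-test
#PBS -l nodes=1:ppn=1:gpus=1   # Torque GPU syntax; resource names vary per site
#PBS -l walltime=01:00:00

# Torque lists the GPUs allocated to this job in $PBS_GPUFILE
cat "$PBS_GPUFILE"

# Run a CUDA application on the allocated GPU (placeholder binary)
./my_cuda_app
```

Agreeing on conventions like this across sites and batch systems (Torque, SLURM, SGE, LSF) is exactly the kind of best practice the group set out to document.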


The EGI Working Group will study:

  • Batch System integration best practices
  • the proposal and evaluation of an appropriate GlueSchema suitable for describing generalised “Computational Accelerators”.
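The kind of information such a GlueSchema would publish can be sketched in GLUE 2 LDIF form as below. The object class and attribute names are illustrative assumptions only, modelled on the accelerator-environment style of extension later standardised in GLUE 2.1; they are not the group's agreed proposal, and the DN components and values are placeholders.

```ldif
# Illustrative sketch only: class and attribute names are assumed,
# following a GLUE 2 "AcceleratorEnvironment" style extension.
dn: GLUE2ResourceID=urn:example:gpu0,GLUE2ServiceID=urn:example:ce,o=glue
objectClass: GLUE2AcceleratorEnvironment
GLUE2AcceleratorEnvironmentID: urn:example:gpu0
GLUE2AcceleratorEnvironmentType: GPU
GLUE2AcceleratorEnvironmentVendor: NVIDIA
GLUE2AcceleratorEnvironmentModel: Tesla M2090
GLUE2AcceleratorEnvironmentPhysicalAccelerators: 2
GLUE2AcceleratorEnvironmentMemory: 6144
```

Publishing objects of this kind through the site information system would let users and brokers discover accelerator capacity in the same way CPU resources are discovered today.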


Membership of the Group is open to Resource Centre representatives who can contribute directly to either of these objectives. User Community representation and expertise from GlueSchema developers are required to aid the development and evaluation of the GlueSchema definition.



Testbed Contacts:

  • Derek Ross/STFC, Emerald GPU cluster with 372 NVIDIA M2090
  • Andrea Sartirana/GRIF, 4 servers, each with 2 accelerators (4 NVIDIA K20 and 4 Xeon Phi)
  • Oleksandr Savytskyi/, GROMACS 4.6.3 testing (native GPU support) on GPU/Xeon Phi clusters
  • Miguel Cardenas/CIEMAT (applications: DES - Dark Energy Survey, artificial intelligence)
  • Mariusz Sterzel, Lukasz Flis/Cyfronet: Application Area - Computational Chemistry and Biology (nodes with 2 or 8 GPGPU cards)
  • Emanouil Atanassov/BG01-IPP: 16 NVIDIA M2090

User Communities

  • WeNMR (MD simulations via GROMACS)
  • MolDynGrid (MD simulations)
  • AstroPhysics (Contact Miguel Cárdenas)



Use Cases

Information about GPGPU use cases was collected in a 2012 survey organised by the GPGPU Virtual Team: see Use Cases.

Useful Links