
Competence centre MoBrain







MoBrain: A Competence Center to Serve Translational Research from Molecule to Brain



CC Coordinator: Alexandre M.J.J. Bonvin

CC Coordinator deputy: Antonio Rosato

CC members' list: cc-mobrain AT mailman.egi.eu

CC meetings: https://indico.egi.eu/indico/categoryDisplay.py?categId=145



Introduction

Today’s translational research era calls for innovative solutions to enable researchers and clinicians to build bridges between the microscopic (molecular) and macroscopic (human) scales. This requires building both transversal and vertical connections between techniques, researchers and clinicians to provide them with an optimal e-Science toolbox to tackle societal challenges related to health.

The main objective of the MoBrain Competence Center (CC) is to lower barriers for scientists to access modern e-Science solutions from micro to macro scales. MoBrain builds on grid- and cloud-based infrastructures and on the existing expertise available within WeNMR (www.wenmr.eu), N4U (neugrid4you.eu) and technology providers (NGIs and other institutions, OSG). This initiative aims to serve its user communities, related ESFRI projects (e.g. INSTRUCT) and, in the long term, the Human Brain Project (FET Flagship), and to strengthen the EGI service offering.

By integrating molecular structural biology and medical imaging services and data, MoBrain will kick-start the development of a larger, integrated, global science virtual research environment for life and brain scientists worldwide. The mini-projects defined in MoBrain are geared toward facilitating this overall objective, each with specific objectives to reinforce existing services, develop new solutions and pave the way toward a global competence center and virtual research environment for translational research from molecule to brain.

There are already many services and support/training mechanisms in place that will be further developed, optimized and merged during the operation of the CC, building on and contributing to the EGI service offering. MoBrain will produce a working environment that is better tailored to end-user needs than any of its individual components. It will provide an extended portfolio of tools and data in a user-friendly e-laboratory of direct relevance for neuroscience, ranging from the quantification of molecular forces, protein folding, biomolecular interactions, drug design and treatments, to improved diagnostics and the full characterization of the pathological mechanisms of brain diseases through both phenomenological and mechanistic approaches.


MoBrain partners

  • Utrecht University, Bijvoet Center for Biomolecular Research, the Netherlands
  • Consorzio Interuniversitario Risonanze Magnetiche Di Metalloproteine Paramagnetiche, Florence University, Italy
  • Consejo Superior de Investigaciones Científicas (and Spanish NGI)
  • Science and Technology Facilities Council, UK
  • Provincia Lombardo Veneta Ordine Ospedaliero di San Giovanni di Dio – Fatebenefratelli, Italy
  • GNUBILA, France
  • Istituto Nazionale Di Fisica Nucleare, Italy
  • SURFsara (Dutch NGI)
  • CESNET (Czech NGI)
  • Open Science Grid (US)

The CC is open for additional members. Please email the CC coordinator to join.


Tasks

T1: Cryo-EM in the cloud: bringing clouds to the data

T2: GPU portals for biomolecular simulations

T3: Integrating the micro- (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) virtual research communities

T4: User support and training

Deliverables

  • D6.4 Fully integrated MoBrain web portal (OTHER), M12 (02.2016) - by T3
  • D6.7 Implementation and evaluation of AMBER and/or GROMACS (R), M13 (03.2016) - by T2
  • D6.12 GPGPU-enabled web portal(s) for MoBrain (OTHER), M16 (06.2016) - by T2&3
  • D6.14 Scipion cloud deployment for MoBrain (OTHER), M21 (11.2016) - by T1

Technical documentations

How to run the DisVis and PowerFit docker images using the enmr.eu VO (updated 14 July 2016)

  • Docker images ready to run on GPU servers have been prepared for Ubuntu: one with OpenCL + NVIDIA drivers + the DisVis application, and one with the PowerFit application, with the goal of checking the performance described here.
  • In collaboration with EGI-Engage task JRA2.4 (Accelerated Computing), the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, whose NVIDIA driver 319.x supports only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out with the expected performance on 1 December 2015.
  • On 14 July 2016 the CIRMMP servers were updated with the latest NVIDIA driver, 352.93, which supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via Docker Hub in the indigodatacloudapps repository.
  • The scripts and commands used to run the test are described below:
$ voms-proxy-init --voms enmr.eu 
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl

where disvis.jdl is: 

[
executable = "disvis.sh";
inputSandbox = { "disvis.sh" ,"O14250.pdb" , "Q9UT97.pdb" , "restraints.dat" };
stdoutput = "out.out";
outputsandboxbasedesturi = "gsiftp://localhost";
stderror = "err.err";
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};
GPUNumber=1;
]

and disvis.sh is (assuming docker engine is installed on the grid WNs):

#!/bin/sh
# Detect the host NVIDIA driver version, used to select the matching image tag
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')
echo hostname=$(hostname)
echo user=$(id)
export WDIR=`pwd`
mkdir res-gpu
echo docker run disvis...
echo starttime=$(date)
rnd=$RANDOM
# Expose the NVIDIA device nodes to the container and mount the job directory on /home
docker run --name=disvis-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \
           --device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \
           -v $WDIR:/home indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \
           -c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu; \
               nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv'
docker rm disvis-$rnd
echo endtime=$(date)
# Pack the results for the output sandbox
tar cfz res-gpu.tgz res-gpu
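
Once the job has been submitted, its status can be polled and the output sandbox retrieved with the standard CREAM CLI commands, for example (a minimal sketch; jobid.txt is the file written by the -o option above, and <job_output_dir> stands for the directory created by glite-ce-job-output):

$ glite-ce-job-status -i jobid.txt        # poll until the job reaches DONE-OK
$ glite-ce-job-output -i jobid.txt        # retrieves out.out, err.err and res-gpu.tgz
$ tar xfz <job_output_dir>/res-gpu.tgz    # unpack the DisVis results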

Important update of 19 September 2016:

If the docker engine is not available on the grid WNs, you can use the INDIGO-DataCloud "udocker" tool. It has the advantage that docker containers run in user space, so the grid user does not obtain root privileges on the WN, thereby avoiding any security concerns. The disvis.sh file in this case is as follows:

#!/bin/sh
# Detect the host NVIDIA driver version, used to select the matching image tag
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')
echo hostname=$(hostname)
echo user=$(id)
export WDIR=`pwd`
echo udocker run disvis...
echo starttime=$(date)
# Fetch udocker and pull the image matching the host driver (runs entirely in user space)
git clone https://github.com/indigo-dc/udocker
cd udocker
./udocker.py pull indigodatacloudapps/disvis:nvdrv_$driver
echo time after pull = $(date)
rnd=$RANDOM
./udocker.py create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver
echo time after udocker create = $(date)
mkdir $WDIR/out
# Bind /dev for GPU access and mount the job directory on /home
./udocker.py run -v /dev --volume=$WDIR:/home disvis-$rnd "disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/out; \
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv"
echo time after udocker run = $(date)
# Clean up the container and image, then pack the results for the output sandbox
./udocker.py rm disvis-$rnd
./udocker.py rmi indigodatacloudapps/disvis:nvdrv_$driver
cd $WDIR
tar zcvf res-gpu.tgz out/
echo endtime=$(date)
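
Before submitting a full grid job, the udocker setup can be checked interactively on any node with GPU access. A minimal sketch, assuming the image tag matches the host driver (here 352.93) and using a throwaway container name:

git clone https://github.com/indigo-dc/udocker
cd udocker
./udocker.py pull indigodatacloudapps/disvis:nvdrv_352.93
./udocker.py create --name=disvis-test indigodatacloudapps/disvis:nvdrv_352.93
./udocker.py run -v /dev disvis-test "nvidia-smi -L"   # should list the GPUs visible on the host
./udocker.py rm disvis-test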

The example input files specified in the jdl script above were taken from the DisVis GitHub repository: https://github.com/haddocking/disvis

The performance on the GPGPU grid resources is as expected for the card type.

The timings below were compared with an in-house GPU node at Utrecht University (GTX680 card), but may differ with the latest update of the DisVis code:

GPGPU type    Timing [minutes]
GTX680        19
M2090 (VM)    15.5
1x K20 (VM)   13.5
2x K20        11
1x K20        11

This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward. Values marked with (VM) refer to GPGPUs hosted in the FedCloud (see next section).
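
Until DisVis itself is parallelized, a possible stop-gap on a dual-GPU WN is to run two independent containers in parallel, each exposing a single GPU device node. This is only a sketch along the lines of disvis.sh above, not something validated on the testbed: the second case's input file names are purely illustrative, and whether the OpenCL runtime in each container then uses only the exposed card would need to be verified.

docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidiactl:/dev/nvidiactl \
           --device=/dev/nvidia-uvm:/dev/nvidia-uvm -v $WDIR:/home \
           indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \
           -c 'disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu0' &
docker run --device=/dev/nvidia1:/dev/nvidia1 --device=/dev/nvidiactl:/dev/nvidiactl \
           --device=/dev/nvidia-uvm:/dev/nvidia-uvm -v $WDIR:/home \
           indigodatacloudapps/disvis:nvdrv_$driver /bin/sh \
           -c 'disvis /home/caseB_fixed.pdb /home/caseB_scanning.pdb /home/caseB_restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu1' &
wait   # only one /dev/nvidiaN node is mapped into each container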

For running PowerFit, just replace disvis.jdl with a powerfit.jdl such as:

[
executable = "powerfit.sh";
inputSandbox = { "powerfit.sh" ,"1046.map" , "GroES_1gru.pdb" };
stdoutput = "out.out";
outputsandboxbasedesturi = "gsiftp://localhost";
stderror = "err.err";
outputsandbox = { "out.out" , "err.err" , "res-gpu.tgz"};
GPUNumber=1;
]

with powerfit.sh as here and input data taken from here (courtesy of Mario David)
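
Since powerfit.sh is only linked above, a hypothetical sketch mirroring disvis.sh is shown here for orientation. The indigodatacloudapps/powerfit image tag, the 23 Å resolution value for the 1046.map example, and the output directory name are assumptions, not taken from the linked script:

#!/bin/sh
# Sketch only: mirrors disvis.sh above, adapted for PowerFit (map, resolution, structure)
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')
export WDIR=`pwd`
mkdir res-gpu
rnd=$RANDOM
docker run --name=powerfit-$rnd --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \
           --device=/dev/nvidiactl:/dev/nvidiactl --device=/dev/nvidia-uvm:/dev/nvidia-uvm \
           -v $WDIR:/home indigodatacloudapps/powerfit:nvdrv_$driver /bin/sh \
           -c 'powerfit /home/1046.map 23 /home/GroES_1gru.pdb -g -d /home/res-gpu'
docker rm powerfit-$rnd
tar cfz res-gpu.tgz res-gpu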

How to run DisVis and PowerFit on VMs of the Federated Cloud using the enmr.eu VO (updated 14 July 2016)

  • Scripts are available to instantiate GPGPU-enabled VMs on the IISAS (CentOS 7 images with Tesla K20m GPUs) and CESNET (Ubuntu images with M2090 GPUs) FedCloud sites, installing the latest NVIDIA drivers and the DisVis and/or PowerFit software: CESNET-create-VM.sh and IISAS-create-VM.sh. An illustrative OCCI call of the kind these scripts wrap is sketched after this list.
  • The scripts use the following user_data files to contextualise the VMs so that they are ready for the required software to be installed: user_data_ubuntu and user_data_centos7. Customise them by inserting your public ssh key.
  • After running the scripts above (voms-proxy-init -voms enmr.eu -r is required before executing them, from a host with an OCCI client installed), ssh into the VM with your ssh key, become root with sudo su -, and execute ./install-gpu-driver.sh. The VM is then ready for the application software to be installed by executing ./install-disvis.sh and/or ./install-powerfit.sh. Scripts for running test samples are available as /home/run-disvisGPU.sh and /home/run-powerfitGPU.sh.
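
For reference, instantiating such a GPGPU-enabled VM by hand on a FedCloud site would look roughly as follows. This is a sketch only: the endpoint URL and the os_tpl/resource_tpl identifiers are placeholders to be taken from the site's OCCI listing, and the VM title is arbitrary.

occi --endpoint https://<site-occi-endpoint> --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action create --resource compute \
     --mixin os_tpl#<gpu_image_id> --mixin resource_tpl#<gpu_flavour_id> \
     --attribute occi.core.title="mobrain-gpu-vm" \
     --context user_data="file://$PWD/user_data_ubuntu"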