EGI-Engage:AMB-2015-12-02
Activity Reports
WP1 (NA1) (Yannick Legre)
Milestones & Deliverables
Task NA1.1 Administrative and Financial Management
Task NA1.2 Technical Management
Task NA1.3 Quality and Risk Management
WP2 (NA2) (Sergio Andreozzi)
Milestones & Deliverables
Task NA2.1 Communication and Dissemination
Task NA2.2 Strategy, Business Development and Exploitation
Task NA2.3 SME/Industry Engagement and Big Data Value Chain
WP3 (JRA1) (Diego Scardaci)
Milestones & Deliverables
Task JRA1.1 Authentication and Authorisation Infrastructure
Task JRA1.2 Service Registry and Marketplace
Task JRA1.3 Accounting
- Survey to get requirements for Data Accounting
- Work on Storage metrics
- Association tables for cloud
- Accounting Portal: first demo at the EGI CF 2015.
- Demo of the Portal in the ATB
Task JRA1.4 Operations Tools
Task JRA1.5 Resource Allocation – e-GRANT
- Testing phase of a new platform merging the Resource Allocation instance and the PFU instance: https://e-grant.egi.eu/v2/
WP4 (JRA2) (Matthew Viljoen)
Milestones & Deliverables
Task JRA2.1 Federated Open Data
- Detailed specification of Open Data use cases
Task JRA2.2 Federated Cloud
- ooi first release (https://appdb.egi.eu/store/software/ooi, https://launchpad.net/ooi/occi-1.1/0.1)
- Conclusion of deliverables D4.2, D4.3.
- Information System initial discussions.
Task JRA2.3 e-Infrastructures Integration
- Start preparation of deliverables D4.5 and D4.6
- Analysis of requirements for accounting for D4Science
Task JRA2.4 Accelerated Computing
- Accelerated Computing in Grid
- Troubleshooting and fixing some misconfigurations at the CIRMMP testbed
- Deployed the MoBrain dockerized DisVis application at the CIRMMP testbed. MoBrain users (through the enmr.eu VO) can now run DisVis on the GPU cluster at CIRMMP via the GPU-enabled CREAM-CE
- Started preparing the certification process: investigating the use of IM (the UPV tool) for automatically deploying a cluster on the EGI FedCloud to be used for the GPU-enabled CREAM-CE certification
- Accelerated Computing in Cloud
- Created a new authentication module for logging into the Horizon dashboard via a Keystone token
- Various client tools: getting a token, installing the NVIDIA driver + CUDA
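The token-based Horizon login above relies on first obtaining a token from Keystone. A minimal sketch of that step, assuming a Keystone v2.0 endpoint; the URL, tenant, and credentials are placeholders (not the actual deployment), the live request is left commented out, and a sample response stands in to show the extraction step:

```shell
# Hypothetical endpoint and credentials -- placeholders, not the real setup.
OS_AUTH_URL="https://keystone.example.org:5000/v2.0"
PAYLOAD='{"auth": {"tenantName": "demo",
  "passwordCredentials": {"username": "alice", "password": "secret"}}}'

# The live request needs a running Keystone v2.0 service, so it is left
# commented out here:
# curl -s -X POST "$OS_AUTH_URL/tokens" \
#      -H "Content-Type: application/json" -d "$PAYLOAD" > response.json

# Sample response in the Keystone v2.0 format, standing in for the real one:
cat > response.json <<'EOF'
{"access": {"token": {"id": "abc123token", "expires": "2015-12-03T12:00:00Z"}}}
EOF

# Pull out the token id (a proper JSON parser is preferable for real responses):
TOKEN=$(sed -n 's/.*"id": "\([^"]*\)".*/\1/p' response.json)
echo "$TOKEN"
```

The extracted token can then be presented to the new authentication module in place of a password when logging into the dashboard.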
WP5 (SA1) (Peter Solagna)
Milestones & Deliverables
Task SA1.1 Operations Coordination
Task SA1.2 Development of Security Operations
Task SA1.3 Integration, Deployment of Grid and Cloud Platforms
WP6 (SA2) (Gergely Sipos)
Milestones & Deliverables
Task SA2.1 Training
Task SA2.2 Technical User Support
Task SA2.3 ELIXIR
Task SA2.4 BBMRI
Task SA2.5 MoBrain
- A docker image with OpenCL + NVIDIA drivers + the DisVis application, ready to run on GPU servers, has been prepared for Ubuntu, with the goal of checking the performances described here.
- In collaboration with task JRA2.4 (see above), the DisVis docker image has been ported to the SL6 GPU servers forming the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out on 1 December with the expected performance.
- A description of the scripts and commands used to run the test follows:
$ voms-proxy-init --voms enmr.eu
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl
where disvis.jdl is:
[
  executable = "disvis.sh";
  inputSandbox = { "disvis.sh", "O14250.pdb", "Q9UT97.pdb", "restraints.dat" };
  stdoutput = "out.out";
  stderror = "err.err";
  outputsandbox = { "out.out", "err.err", "res-gpu.tgz" };
  outputsandboxbasedesturi = "gsiftp://localhost";
  GPUNumber = 2;
]
and disvis.sh is:
#!/bin/sh
echo hostname=$(hostname)
echo user=$(id)
export WDIR=`pwd`
echo docker run opencl_disvis...
echo starttime=$(date)
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \
  --device=/dev/nvidiactl:/dev/nvidiactl -v $WDIR:/home opencl_disvis /bin/sh \
  -c 'export LD_LIBRARY_PATH=/usr/local/lib64; disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu'
echo endtime=$(date)
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv
tar cfz res-gpu.tgz res-gpu
The example input files specified in the JDL script above were taken from the DisVis GitHub repository: https://github.com/haddocking/disvis
The performance on the GPGPU grid resources is what is expected for the card type.
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card):
GPGPU type    Timing [minutes]
GTX680        19
2xK20         12
1xK20         12
Since the 2xK20 and 1xK20 timings are identical, this indicates that DisVis is currently not making use of both available GPGPUs.
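The Ubuntu image used in the tests above (opencl_disvis) can be sketched as a Dockerfile. This is a hypothetical reconstruction, not the actual recipe used at CIRMMP: the package names, the driver handling, and the DisVis install step are assumptions.

```dockerfile
# Hypothetical sketch of the opencl_disvis image; package names and install
# steps are assumptions, not the actual recipe used in the tests above.
FROM ubuntu:14.04

# Build tools, Python stack, and the OpenCL ICD loader
RUN apt-get update && apt-get install -y \
    build-essential git python python-pip python-numpy \
    ocl-icd-libopencl1 clinfo

# The NVIDIA user-space driver libraries inside the image must match the
# host driver version, since /dev/nvidia* devices are passed through at
# run time with --device (see disvis.sh above); the exact package to
# install is therefore host-dependent.

# Install DisVis and its OpenCL bindings from the upstream repository
RUN pip install pyopencl && \
    git clone https://github.com/haddocking/disvis.git /opt/disvis && \
    cd /opt/disvis && python setup.py install

# Job inputs are bind-mounted here at run time (-v $WDIR:/home)
WORKDIR /home
```

The run-time device pass-through in disvis.sh, rather than anything baked into the image, is what gives the container access to the GPUs.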