EGI-Engage:AMB-2015-12-02

Activity Reports

WP1 (NA1) (Yannick Legre)

Dissemination reports

Milestones & Deliverables

Task NA1.1 Administrative and Financial Management

Task NA1.2 Technical Management

Task NA1.3 Quality and Risk Management

WP2 (NA2) (Sergio Andreozzi)

Dissemination reports

Milestones & Deliverables

Task NA2.1 Communication and Dissemination

Task NA2.2 Strategy, Business Development and Exploitation

Task NA2.3 SME/Industry Engagement and Big Data Value Chain

WP3 (JRA1) (Diego Scardaci)

Dissemination reports

Milestones & Deliverables

  • Next Deliverables at PM12

Task JRA1.1 Authentication and Authorisation Infrastructure

  • By mid-December, GRNET will provide the IdP/SP proxy component
  • The first SP using this proxy will be the GOCDB testing instance (an illustrative command-line check is sketched after this list)
  • EGI SSO and the GRNET guest IdPs are to be integrated into the pilot
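
As an illustrative check only (the hostnames below are placeholders, not the actual pilot endpoints), a web SP protected by the IdP/SP proxy would normally answer an unauthenticated request with an HTTP redirect whose Location header points at the proxy's login or discovery page, which can be inspected from the command line:

$ # Placeholder URL standing in for the GOCDB testing instance behind the proxy
$ curl -s -I https://gocdb-test.example.org/portal/ | grep -i -E '^(HTTP|Location)'

If the SP is wired to the proxy as planned, the output should show a 3xx status and a Location on the proxy host rather than a local login page.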

Task JRA1.2 Service Registry and Marketplace

  • Assessment of technologies to implement the marketplace is still ongoing (waiting for the first draft document from Dean)

Task JRA1.3 Accounting

  • Survey to get requirements for Data Accounting
  • Work on Storage metrics
  • Association tables for cloud
  • Accounting Portal: first demo at the EGI CF 2015.
  • Demo of the Portal in the ATB

Task JRA1.4 Operations Tools

Ops Portal

GOCDB

Task JRA1.5 Resource Allocation – e-GRANT

WP4 (JRA2) (Matthew Viljoen)

Dissemination reports

Milestones & Deliverables

Task JRA2.1 Federated Open Data

Detailed specification of Open Data use cases

Task JRA2.2 Federated Cloud

Task JRA2.3 e-Infrastructures Integration

  • Start preparation of deliverables D4.5 and D4.6
  • Analysis of requirements for accounting for D4Science

Task JRA2.4 Accelerated Computing

  • Accelerated Computing in Grid
    • Troubleshooting and fixing some misconfigurations at the CIRMMP testbed
    • Deployed the MoBrain Dockerized DisVis application at the CIRMMP testbed. MoBrain users (through the enmr.eu VO) can now run DisVis on the GPU cluster at CIRMMP via the GPU-enabled CREAM-CE
    • Started preparing the certification process: investigating the use of IM (UPV tool) for automatically deploying a cluster on the EGI FedCloud to be used for the GPU-enabled CREAM-CE certification
  • Accelerated Computing in Cloud
    • Created a new authentication module for logging into the Horizon dashboard via a Keystone token
    • Various client tools: obtaining a Keystone token, installing the NVIDIA drivers and CUDA (a token-retrieval sketch follows this list)
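
As a minimal sketch of the token-based login mentioned in this list (not the actual module code), the shell snippet below requests a project-scoped token from the Keystone Identity v3 API with curl; the endpoint, user, password, project and domain values are placeholders:

# Placeholders: adjust to the actual Keystone endpoint and credentials.
OS_AUTH_URL="https://keystone.example.org:5000/v3"

# Request a project-scoped token; Keystone returns it in the X-Subject-Token header.
TOKEN=$(curl -s -i -H "Content-Type: application/json" \
  -d '{ "auth": {
          "identity": { "methods": ["password"],
            "password": { "user": { "name": "demo",
                                    "domain": { "name": "Default" },
                                    "password": "secret" } } },
          "scope": { "project": { "name": "demo",
                                  "domain": { "name": "Default" } } } } }' \
  "$OS_AUTH_URL/auth/tokens" | tr -d '\r' | awk 'tolower($1) == "x-subject-token:" {print $2}')

# The token can then be passed to other OpenStack services (for example a
# Horizon login module) in the X-Auth-Token header.
echo "Token: $TOKEN"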

WP5 (SA1) (Peter Solagna)

Dissemination reports

Milestones & Deliverables

Task SA1.1 Operations Coordination

Task SA1.2 Development of Security Operations

Task SA1.3 Integration, Deployment of Grid and Cloud Platforms

WP6 (SA2) (Gergely Sipos)

Dissemination reports

Milestones & Deliverables

Task SA2.1 Training

Task SA2.2 Technical User Support

Task SA2.3 ELIXIR

Task SA2.4 BBMRI

Task SA2.5 MoBrain

  • A Docker image with OpenCL, the NVIDIA drivers and the DisVis application, ready to run on GPU servers, has been prepared for Ubuntu, with the goal of checking the performance described here.
  • In collaboration with task JRA2.4 (see above), the DisVis Docker image has been ported to the SL6 GPU servers forming the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out on 1 December with the expected performance.
  • The scripts and commands used to run the test are described below:
$ voms-proxy-init --voms enmr.eu 
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl

where disvis.jdl is: 

[
Executable = "disvis.sh";
InputSandbox = { "disvis.sh", "O14250.pdb", "Q9UT97.pdb", "restraints.dat" };
StdOutput = "out.out";
OutputSandboxBaseDestURI = "gsiftp://localhost";
StdError = "err.err";
OutputSandbox = { "out.out", "err.err", "res-gpu.tgz" };
GPUNumber = 2;
]

and disvis.sh is:

#!/bin/sh

# Log the worker node, the local user and the start time for debugging.
echo hostname=$(hostname)
echo user=$(id)
export WDIR=$(pwd)
echo docker run opencl_disvis...
echo starttime=$(date)
# Expose the two GPUs and the NVIDIA control device to the container, mount the
# job directory as /home and run DisVis inside the opencl_disvis image.
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \
           --device=/dev/nvidiactl:/dev/nvidiactl -v $WDIR:/home opencl_disvis /bin/sh \
           -c 'export LD_LIBRARY_PATH=/usr/local/lib64; disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu'
echo endtime=$(date)
# Report per-process GPU usage and pack the results for the output sandbox.
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv
tar cfz res-gpu.tgz res-gpu

The example input files specified in the JDL above were taken from the DisVis GitHub repository: https://github.com/haddocking/disvis
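
Once the job has finished, its status can be checked and the output sandbox (out.out, err.err and res-gpu.tgz) retrieved with the standard CREAM CLI on the user interface, reading the job identifier saved in jobid.txt by the -o option above; this is only a sketch and option names may differ slightly between CLI versions:

$ glite-ce-job-status -i jobid.txt
$ glite-ce-job-output -i jobid.txt

glite-ce-job-output typically downloads the sandbox files into a per-job subdirectory of the current working directory.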

The performance on the GPGPU grid resources is what is expected for the card type.

The timings were compared to an in-house GPU node at Utrecht University (GTX680 card):

GPGPU type   Timing [minutes]
GTX680       19
2xK20        12
1xK20        12

The identical timings with one and two K20 cards indicate that DisVis, as expected, is currently unable to use both of the available GPGPUs. However, we plan to use this testbed to parallelize it on multiple GPUs, which should be relatively straightforward.

Task SA2.6 DARIAH

Task SA2.7 LifeWatch

Task SA2.8 EISCAT_3D

Task SA2.9 EPOS

Task SA2.10 Disaster Mitigation