Revision as of 10:13, 2 December 2015 by Sarac (talk | contribs) (Task NA2.1 Communication and Dissemination)


Activity Reports

WP1 (NA1) (Yannick Legre)

Dissemination reports

Milestones & Deliverables

Task NA1.1 Administrative and Financial Management

Task NA1.2 Technical Management

Task NA1.3 Quality and Risk Management

WP2 (NA2) (Sergio Andreozzi)

Dissemination reports

Milestones & Deliverables

  • D2.6: “Report on data sharing policies and legal framework in fishery and marine sciences data sector” (FAO)
  • D2.7: “Market Report on the fishery and marine sciences data sector” (ENG)

Task NA2.1 Communication and Dissemination

  • Preparations for Amsterdam 2016: initial plan
  • Preparations for Krakow 2016: setting up SC meeting
  • Commissioning of the newsletter: first steps
  • Preparation of news items (Ubercloud, 7IS)
  • New case study: co-infection in snake diseases!
  • Wrap-up of CF2015: report finished
  • Website redevelopment - Briefs sent and sitemap discussed
  • New photos

Task NA2.2 Strategy, Business Development and Exploitation

  • Service Management
    • SSB Call held
    • 1st paid FitSM Training course - Terradue

Task NA2.3 SME/Industry Engagement and Big Data Value Chain

WP3 (JRA1) (Diego Scardaci)

Dissemination reports

Milestones & Deliverables

  • Next Deliverables at PM12

Task JRA1.1 Authentication and Authorisation Infrastructure

  • By mid December GRNET will provide the IdP/SP proxy component
  • First SP using this proxy will be the GOCDB testing instance
  • EGI SSO and GRNET guest IdPs to be integrated in the pilot

Task JRA1.2 Service Registry and Marketplace

  • Assessment of technologies to implement the marketplace still on-going (waiting for the first draft doc from Dean)

Task JRA1.3 Accounting

  • Survey to get requirements for Data Accounting
  • Work on Storage metrics
  • Association tables for cloud
  • Accounting Portal: first demo at the EGI CF 2015.
  • Demo of the Portal in the ATB

Task JRA1.4 Operations Tools

Ops Portal


Task JRA1.5 Resource Allocation – e-GRANT

WP4 (JRA2) (Matthew Viljoen)

Dissemination reports

Milestones & Deliverables

Task JRA2.1 Federated Open Data

Detailed specification of Open Data use cases

Task JRA2.2 Federated Cloud

Task JRA2.3 e-Infrastructures Integration

  • Start preparation of deliverables D4.5 and D4.6
  • Analysis of requirements for accounting for D4Science

Task JRA2.4 Accelerated Computing

  • Accelerated Computing in Grid
    • Troubleshooting and fixing some misconfiguration at CIRMMP testbed
    • Deployed the MoBrain dockerized DisVis application at the CIRMMP testbed. MoBrain users (through the VO) can now run DisVis on the GPU cluster at CIRMMP via the GPU-enabled CREAM-CE
    • Started preparing the certification process: investigating the use of IM (UPV tool) for automatically deploying clusters on the EGI FedCloud to be used for the GPU-enabled CREAM-CE certification
  • Accelerated Computing in Cloud
    • Created a new authentication module for logging into the Horizon dashboard via a Keystone token
    • Various client tools: getting a token, installing nvidia+cuda
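As background to the Horizon login work above, a minimal sketch of what token-based authentication against the OpenStack Identity (Keystone) v3 API looks like; the function name and the example values are illustrative, not the actual module developed in the task:

```python
import json

def build_v3_token_auth(token_id, project=None, domain="Default"):
    """Build the JSON body for POST /v3/auth/tokens when re-authenticating
    with an existing Keystone token instead of a username/password."""
    auth = {"identity": {"methods": ["token"], "token": {"id": token_id}}}
    if project:
        # Scope the new token to a project so it can be used by the dashboard.
        auth["scope"] = {"project": {"name": project,
                                     "domain": {"name": domain}}}
    return {"auth": auth}

# Hypothetical token id and project name, for illustration only.
body = build_v3_token_auth("gAAAAABexampletoken", project="demo")
print(json.dumps(body, indent=2))
```

On success, Keystone returns the newly issued token in the `X-Subject-Token` response header, which a dashboard login module can then install in the user's session.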

WP5 (SA1) (Peter Solagna)

Dissemination reports

Milestones & Deliverables

Task SA1.1 Operations Coordination

Task SA1.2 Development of Security Operations

Task SA1.3 Integration, Deployment of Grid and Cloud Platforms

WP6 (SA2) (Gergely Sipos)

Dissemination reports

Milestones & Deliverables

Task SA2.1 Training

Task SA2.2 Technical User Support


Task SA2.4 BBMRI

Task SA2.5 MoBrain

  • A Docker image with OpenCL + NVIDIA drivers + the DisVis application, ready to run on GPU servers, has been prepared for Ubuntu, with the goal of checking the performances described here.
  • In collaboration with task JRA2.4 (see above), the DisVis Docker image has been ported to the SL6 GPU servers forming the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the VO were carried out on 1 December with the expected performance.
  • Here follows a description of the scripts and commands used to run the test:
$ voms-proxy-init --voms 
$ glite-ce-job-submit -o jobid.txt -a -r disvis.jdl

where disvis.jdl is: 

Executable = "";
InputSandbox = { "", "O14250.pdb", "Q9UT97.pdb", "restraints.dat" };
StdOutput = "out.out";
OutputSandboxBaseDestURI = "gsiftp://localhost";
StdError = "err.err";
OutputSandbox = { "out.out", "err.err", "res-gpu.tgz" };

and the executable script is:


#!/bin/sh
# Report where the job landed, then run DisVis inside the opencl_disvis
# container with the node's GPU devices exposed.
echo hostname=$(hostname)
echo user=$(id)
export WDIR=`pwd`
echo docker run opencl_disvis...
echo starttime=$(date)
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \
           --device=/dev/nvidiactl:/dev/nvidiactl -v $WDIR:/home opencl_disvis /bin/sh \
           -c 'export LD_LIBRARY_PATH=/usr/local/lib64; disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu'
echo endtime=$(date)
# Per-process GPU accounting, then pack the results for the output sandbox.
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv
tar cfz res-gpu.tgz res-gpu
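The `nvidia-smi --query-accounted-apps` call above emits CSV. A short sketch of how such output could be post-processed into per-job runtimes; the sample line below is illustrative, not captured from the CIRMMP testbed:

```python
import csv
import io

# Illustrative nvidia-smi accounted-apps CSV output (not real testbed data).
sample = """pid, gpu_serial, gpu_name, gpu_utilization [%], time [ms]
12345, 0324713033232, Tesla K20m, 98 %, 720000
"""

reader = csv.reader(io.StringIO(sample), skipinitialspace=True)
header = next(reader)  # skip the column-name row
for row in reader:
    pid, serial, name, util, time_ms = row
    minutes = int(time_ms) / 60000.0  # ms -> minutes
    print(f"{name}: pid={pid}, utilization={util}, runtime={minutes:.0f} min")
# → Tesla K20m: pid=12345, utilization=98 %, runtime=12 min
```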

The example input files specified in the JDL script above were taken from the GitHub repo for DisVis, available from:

The performance on the GPGPU grid resources is what is expected for the card type.

The timings were compared to an in-house GPU node at Utrecht University (GTX680 card):

GPGPU type   Timing [minutes]
GTX680       19
2xK20        12
1xK20        12

This indicates that DisVis, as expected, is currently incapable of using both available GPGPUs. However, we plan to use this testbed to parallelize it across multiple GPUs, which should be relatively straightforward.
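As a quick check, the comparison above can be restated with the numbers from the timings table (values copied from this section):

```python
# Timings in minutes, as reported in the table above.
timings = {"GTX680": 19, "2xK20": 12, "1xK20": 12}

# Speedup of a single K20 over the in-house GTX680 node at Utrecht.
speedup = timings["GTX680"] / timings["1xK20"]
print(f"K20 speedup over GTX680: {speedup:.2f}x")
# → K20 speedup over GTX680: 1.58x

# Identical one- and two-GPU timings are what show that DisVis
# currently exploits only a single GPU.
print("single-GPU only:", timings["2xK20"] == timings["1xK20"])
# → single-GPU only: True
```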


Task SA2.7 LifeWatch

Task SA2.8 EISCAT_3D

Task SA2.9 EPOS

Task SA2.10 Disaster Mitigation