EGI-Engage:AMB-2015-12-02
Revision as of 10:17, 2 December 2015 by Syh
Activity Reports
WP1 (NA1) (Yannick Legre)
Milestones & Deliverables
Task NA1.1 Administrative and Financial Management
Task NA1.2 Technical Management
Task NA1.3 Quality and Risk Management
WP2 (NA2) (Sergio Andreozzi)
Milestones & Deliverables
- D2.6: “Report on data sharing policies and legal framework in fishery and marine sciences data sector” (FAO)
- ToC available at: https://documents.egi.eu/document/2699
- Moderators and reviewers need to be confirmed now, so that early feedback can ensure the deliverable's output is useful for EGI
- D2.7: “Market Report on the fishery and marine sciences data sector” (ENG)
- ToC available at: https://documents.egi.eu/document/2700
Task NA2.1 Communication and Dissemination
- Preparations for Amsterdam 2016: initial plan
- Preparations for Krakow 2016: setting up SC meeting
- Commissioning of the newsletter: first steps
- Preparation of news items (Ubercloud, 7IS)
- New case study: co-infection in snake diseases!
- Wrap up of CF2015 - report finished
- Website redevelopment - Briefs sent and sitemap discussed
- New photos
Task NA2.2 Strategy, Business Development and Exploitation
- Business Development
- 1st paid FitSM Training course - Terradue
- Organisational aspects of the FitSM open-registration course - 10 Dec (12 current registrants: 7 paying + 5 EGI.eu staff)
- Marketplace
- Post-CF'15 action summary
- Service Management
- SSB Call held - 26 Nov
Task NA2.3 SME/Industry Engagement and Big Data Value Chain
- SME Engagement
- Calls held/planned with: Arctur, Zenotech, Luna Technologies
- Exchange with Jesus on how to formalise relations with 4 SMEs
- Group call at 14:00 (2 Dec)
- Market Analysis
- D2.6 and D2.7 ToC circulated - need reviewers
- Group call at 10:30 (2 Dec)
WP3 (JRA1) (Diego Scardaci)
Milestones & Deliverables
- Next Deliverables at PM12
Task JRA1.1 Authentication and Authorisation Infrastructure
- By mid December GRNET will provide the IdP/SP proxy component
- First SP using this proxy will be the GOCDB testing instance
- EGI SSO and GRNET guest IdPs to be integrated in the pilot
Task JRA1.2 Service Registry and Marketplace
- Assessment of technologies to implement the marketplace still ongoing (waiting for the first draft document from Dean)
Task JRA1.3 Accounting
- Survey to get requirements for Data Accounting
- Work on Storage metrics
- Association tables for cloud
- Accounting Portal: first demo at the EGI CF 2015.
- Demo of the Portal in the ATB
Task JRA1.4 Operations Tools
Ops Portal
GOCDB
- Working on the v5.5 release, to be deployed on 02/12/2015
- Roadmap status: https://wiki.egi.eu/wiki/TASK_JRA1.4_Operations_Tools#GOCDB
Task JRA1.5 Resource Allocation – e-GRANT
- Testing phase of a new platform combining the Resource Allocation instance and the pfu instance: https://e-grant.egi.eu/v2/
WP4 (JRA2) (Matthew Viljoen)
Milestones & Deliverables
Task JRA2.1 Federated Open Data
Detailed specification of Open Data use cases
Task JRA2.2 Federated Cloud
- ooi first release (https://appdb.egi.eu/store/software/ooi, https://launchpad.net/ooi/occi-1.1/0.1)
- Conclusion of deliverables D4.2, D4.3.
- Information System initial discussions.
Task JRA2.3 e-Infrastructures Integration
- Start preparation of deliverables D4.5 and D4.6
- Analysis of requirements for accounting for D4Science
Task JRA2.4 Accelerated Computing
- Accelerated Computing in Grid
- Troubleshooting and fixing some misconfiguration at CIRMMP testbed
- Deployed the MoBrain dockerized DisVis application at the CIRMMP testbed. MoBrain users (through the enmr.eu VO) can now run DisVis exploiting the GPU cluster at CIRMMP via the GPU-enabled CREAM-CE
- Started preparing the certification process: investigating the use of IM (the UPV tool) for automatically deploying a cluster on the EGI FedCloud to be used for the GPU-enabled CREAM-CE certification
- Accelerated Computing in Cloud
- Created new authentication module for logging into Horizon dashboard via keystone token
- Various client tools: getting a token, installing NVIDIA drivers + CUDA
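The token step above can be sketched as follows. This is an illustrative sketch only, not the project's actual tooling: the endpoint URL, user, project, and password values are placeholders, and the request body follows the standard OpenStack Identity v3 password-authentication format. The script only prints the request it would send; the commented curl line shows how it would be submitted to a real Keystone endpoint (the token comes back in the X-Subject-Token response header).

```shell
#!/bin/sh
# Sketch: obtain a Keystone (Identity v3) token, e.g. for use with the
# Horizon dashboard. All values below are placeholders, not real credentials.
OS_AUTH_URL="https://keystone.example.org:5000/v3"
OS_USERNAME="demo"
OS_PASSWORD="secret"
OS_PROJECT_NAME="demo"

# Identity v3 password-authentication request body
PAYLOAD=$(cat <<EOF
{
  "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "${OS_USERNAME}",
          "domain": { "id": "default" },
          "password": "${OS_PASSWORD}"
        }
      }
    },
    "scope": {
      "project": { "name": "${OS_PROJECT_NAME}", "domain": { "id": "default" } }
    }
  }
}
EOF
)
echo "$PAYLOAD"
# The token is returned in the X-Subject-Token response header:
# curl -si -H "Content-Type: application/json" -d "$PAYLOAD" \
#   "${OS_AUTH_URL}/auth/tokens" | grep -i x-subject-token
```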
WP5 (SA1) (Peter Solagna)
Milestones & Deliverables
Task SA1.1 Operations Coordination
Task SA1.2 Development of Security Operations
Task SA1.3 Integration, Deployment of Grid and Cloud Platforms
WP6 (SA2) (Gergely Sipos)
Milestones & Deliverables
Task SA2.1 Training
Task SA2.2 Technical User Support
Task SA2.3 ELIXIR
Task SA2.4 BBMRI
Task SA2.5 MoBrain
- A Docker image with OpenCL + NVIDIA drivers + the DisVis application, ready to run on GPU servers, has been prepared for Ubuntu, with the goal of checking the performances described here.
- In collaboration with task JRA2.4 (see above), the DisVis Docker image has been ported to the SL6 GPU servers forming the grid-enabled cluster at CIRMMP, and successful grid submissions to this cluster using the enmr.eu VO were carried out on 1 December with the expected performance.
- The scripts and commands used to run the test are described below:
$ voms-proxy-init --voms enmr.eu
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl
where disvis.jdl is:
[
  Executable = "disvis.sh";
  InputSandbox = { "disvis.sh", "O14250.pdb", "Q9UT97.pdb", "restraints.dat" };
  StdOutput = "out.out";
  StdError = "err.err";
  OutputSandbox = { "out.out", "err.err", "res-gpu.tgz" };
  OutputSandboxBaseDestURI = "gsiftp://localhost";
  GPUNumber = 2;
]
and disvis.sh is:
#!/bin/sh
echo hostname=$(hostname)
echo user=$(id)
export WDIR=`pwd`
echo docker run opencl_disvis...
echo starttime=$(date)
docker run --device=/dev/nvidia0:/dev/nvidia0 --device=/dev/nvidia1:/dev/nvidia1 \
  --device=/dev/nvidiactl:/dev/nvidiactl -v $WDIR:/home opencl_disvis /bin/sh \
  -c 'export LD_LIBRARY_PATH=/usr/local/lib64; disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -g -a 5.27 -vs 1 -d /home/res-gpu'
echo endtime=$(date)
nvidia-smi --query-accounted-apps=pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv
tar cfz res-gpu.tgz res-gpu
The example input files specified in the JDL script above were taken from the DisVis GitHub repository: https://github.com/haddocking/disvis
The performance on the GPGPU grid resources is what is expected for the card type.
The timings were compared to an in-house GPU node at Utrecht University (GTX680 card):
GPGPU type    Timing [minutes]
GTX680        19
2x K20        12
1x K20        12
This indicates that DisVis, as expected, is currently incapable of using both of the available GPGPUs. However, we plan to use this testbed to parallelise it across multiple GPUs, which should be relatively straightforward.