
FedCloudWeNMR


General Information

  • Status: closed
  • Start Date: 01/01/13
  • End Date: 30/04/14
  • EGI.eu contact: Gergely Sipos / gergely.sipos@egi.eu
  • External contact: Marco Verlato / Marco.Verlato@pd.infn.it


Short Description

The objective of WeNMR is to optimize and extend the use of the NMR and SAXS research infrastructures through the implementation of an e-infrastructure that provides the user community with a platform integrating and streamlining the computational approaches necessary for NMR and SAXS data analysis and structural modelling. Access to the e-NMR infrastructure is provided through a portal integrating commonly used software and grid technology.


Tasks


Use case 1 (in progress)

Using VMs prepared with Gromacs and other software to run MD simulations for educational purposes, possibly on multi-core VMs. These VMs are currently run on the SARA cloud for a computer practical in university courses, see here.
A more ambitious use case, also related to Gromacs, would be to submit over the cloud the WeNMR/Gromacs jobs that are currently run over the grid as described here.
The cloud version of WeNMR/Gromacs would have two main advantages:

  • the possibility to run on boxes with more than 6 CPU cores, if the size of the VMs made available by the cloud provider allows it;
  • avoiding the queue limits on CPU time, which are reached very quickly on typical grid sites.

Of course this would require a non-negligible effort to adapt the WeNMR/Gromacs portal to submit jobs to the cloud; one possible option could be to go through a JSAGA-OCCI adaptor as described here.
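
As an illustration of the multi-core advantage, the contextualisation of such a VM could end with a short script that runs Gromacs on all available cores. This is only a sketch: the Gromacs 4.x commands (grompp, mdrun -nt) are standard, but the input file names are placeholders and nothing here is taken from the actual WeNMR portal setup.

#!/bin/bash
# Sketch: run a Gromacs 4.x MD job on all cores of the VM (input file names are placeholders)
NCORES=$(nproc)
grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr
mdrun -nt ${NCORES} -s topol.tpr -deffnm md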

Use case 2 (no longer supported)

Validating and improving biomolecular NMR structures using VirtualCing (VCing), a Virtual Machine (VM) equipped with a complex suite of ~25 programs.
A presentation of the current deployment at the Dutch National HPC Cloud is available here, and a paper has recently been published here.
The cloud usage framework is based on a pilot-job mechanism that makes use of the ToPoS tool, so it naturally allows VCing tasks to be executed across multiple cloud providers. Note that the framework is independent of the cloud access interface: it would also work with simple grid jobs, as long as the user-defined (or VO-manager-defined) VCing VM is available at the grid site, e.g. in an SE (or in the VO software area mounted by the WNs), and the grid job is allowed to start the VM. Technical details about its current implementation are available here.
A live demonstration of the deployment and use of VCing on the WNoDeS testbed of the INFN-CNAF computing centre was shown at the EGI TF 2012 held in September. Further demonstrations of VCing over the EGI Federated Cloud testbed were shown at the Cloudscape V Workshop held in February 2013 and at the EGI CF 2013 held in April.
The VCing OS is Ubuntu 11.04 i386, and its image size has recently been shrunk to 8.1 GB in compressed qcow2 KVM format (from a 20 GB raw image). This demo version of the VCing image is in the EGI marketplace and is freely available for demonstrations.
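
For reference, shrinking a raw image into a compressed qcow2 image can be done with qemu-img; this is just a sketch and the file names below are placeholders, not the actual VCing image names.

$ qemu-img convert -c -O qcow2 vcing-demo.raw vcing-demo.qcow2
$ qemu-img info vcing-demo.qcow2   # check the resulting format and size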

1. Below are the steps carried out to make the VCing demo image work with:
- the WNoDeS framework, for the network and Debian/Ubuntu:

  • modifying the file /etc/network/interfaces as follows:
auto lo
iface lo inet loopback
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
auto eth1
allow-hotplug eth1
iface eth1 inet dhcp
  • deleting the /etc/udev/rules.d/70-persistent-net.rules file
  • removing the hostname from the file /etc/hostname

- the CESNET framework based on Xen:

  • copy the files provided by CESNET to:
/etc/grub.d/10_linux   # removed Grub sub-menus breaking pygrub
/etc/init/hvc0.conf      # added a new console, the file didn't exist before
/usr/sbin/update-grub # replacing KVM-specific disk names with universal hd0
  • then run the command update-grub

Other cloud frameworks can of course have their own contextualisation recipes; a consolidated shell sketch of the WNoDeS steps above is shown below.
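
This is only a sketch of the WNoDeS-related steps listed in point 1, assuming it is run as root inside the image before it is snapshotted; it is not the official contextualisation script.

#!/bin/bash
# Sketch of the WNoDeS contextualisation steps (run as root inside the image)
# Rewrite /etc/network/interfaces with DHCP on eth0 and eth1
cat > /etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
auto eth1
allow-hotplug eth1
iface eth1 inet dhcp
EOF
# Drop the persistent-net udev rules so that NIC names are regenerated on first boot
rm -f /etc/udev/rules.d/70-persistent-net.rules
# Clear the hostname so that the cloud framework can set it
> /etc/hostname
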
2. From any machine that has the ToPoS client installed, add the tokens to the ToPoS server VCing demo area https://topos.grid.sara.nl/4.1/pools/vCingDemo:

$ ./topos createTokensFromLinesInFile vCingDemo demo_token_list.txt

For testing purposes you can also try with a single token, by manually adding, e.g., the following line from the ToPoS web interface:

refineEntryNrg 1iy6 9 http://nmr.cmbi.ru.nl/NRG-CING/data i@nmr.cmbi.ru.nl:/mnt/data/D/NMR_REDO . . BY_CH23_BY_ENTRY CING 0 auto 0 0 1

3. You can now start as many VCing instances as possible on the federated cloud. Job payloads will be retrieved automatically by each VCing instance from the ToPoS server pool until all the tokens are processed. For demonstration purposes you can check the token list live here and see that tokens first get locked and then disappear once processed. A graphical example of the final result can be shown by clicking here.
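
Inside each instance, the pilot loop roughly looks as follows. This is only a sketch: the ToPoS client subcommands used here (nextTokenWithLock, getToken, deleteToken), the lock timeout and the wrapper that runs the task are assumptions, not the exact VCing implementation.

#!/bin/bash
# Sketch of the pilot loop inside a VCing instance (subcommand names and lock timeout are assumptions)
POOL=vCingDemo
while true; do
    # Fetch the next token and lock it (e.g. for 2 hours); stop when the pool is empty
    TOKEN=$(./topos nextTokenWithLock ${POOL} 7200)
    [ -z "${TOKEN}" ] && break
    ./topos getToken ${POOL} ${TOKEN} > payload.txt
    # Run the VCing task described by the token (hypothetical wrapper script)
    ./run_vcing_task.sh payload.txt
    # Delete the processed token so that it disappears from the pool
    ./topos deleteToken ${POOL} ${TOKEN}
done
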
4. At the end of the processing, output data are copied from the VCing instance to the NMR server nmr.cmbi.ru.nl hosted at CMBI:

  • ssh traffic must be allowed from the VCing instance to the NMR server
  • the cloud subnet address must be included in the /etc/firewall.conf of the NMR server
  • a post-processing script untars the zipped output data and makes it accessible through the web, e.g. from here
  • in the latest VCing demo image version (14 Feb 2013) the post-processing at the NMR server is triggered remotely from the VM right after the output data copy, so there is no longer any need to log into the NMR server to perform post-processing as was done in the EGI TF 2012 demo (see the sketch after this list)
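
A minimal sketch of this final step, assuming scp/ssh access from the VM to the NMR server; the remote directory and the post-processing script name are placeholders, not the actual CMBI setup.

#!/bin/bash
# Sketch (remote directory and script name are placeholders): copy results and trigger remote post-processing
RESULTS=results_1iy6.tgz
NMR_SERVER=nmr.cmbi.ru.nl
# Copy the zipped output from the VCing instance to the NMR server
scp ${RESULTS} i@${NMR_SERVER}:/mnt/data/incoming/
# Trigger the post-processing remotely, so no interactive login on the NMR server is needed
ssh i@${NMR_SERVER} "/usr/local/bin/postprocess_vcing.sh /mnt/data/incoming/${RESULTS}"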

In September 2013 a collaboration with the DIRAC project started, aiming to use the VMDIRAC portal as a VM scheduler.
The DIRAC broker is not involved in this use case, because the job payload for the VCing VM is provided by the ToPoS server as described above. The DIRAC team used ssh contextualisation to run the VM Monitor Agent inside the VMs; this agent updates the VM status for VM management in the DIRAC portal and, at the same time, checks the CPU activity in the VM: if there is no activity within a certain time window, it stops the VM automatically (a sketch of such an idle check is shown below).
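
This is only an illustrative sketch of such an idle check, not the actual VM Monitor Agent; the load threshold, window and polling interval are arbitrary assumptions.

#!/bin/bash
# Sketch of an idle-VM check (thresholds are arbitrary assumptions, not the VM Monitor Agent values)
IDLE_CHECKS=6    # consecutive low-load checks before the VM is stopped
INTERVAL=600     # seconds between checks
idle_count=0
while true; do
    # Take the 1-minute load average and compare it against a low threshold (0.05)
    load=$(awk '{print $1}' /proc/loadavg)
    if awk -v l="$load" 'BEGIN {exit !(l < 0.05)}'; then
        idle_count=$((idle_count + 1))
    else
        idle_count=0
    fi
    # Stop the VM once it has been idle for the whole window
    [ ${idle_count} -ge ${IDLE_CHECKS} ] && /sbin/shutdown -h now
    sleep ${INTERVAL}
done
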
The goal of this effort is twofold:

  • WeNMR can take advantage of the VM machinery in DIRAC and of its web-based monitoring and management
  • VMDIRAC can be proven as a tool for VM scheduling without the use of the DIRAC broker
