CVMFS Task Force
Coordinator: Catalin Condurache/NGI_UK
Meetings page: Agendas
Mailing list: cvmfs-tf (at) mailman.egi.eu
- NGI FI - Ulf Tigerstedt, Luís Alves
- NGI HR - Luko Gjenero, Emir Imamagic
- NGI IBERGRID - Goncalo Borges
- OSG - Scott Teige
- NGI ZA - Bruce Becker
- NGI IT - Paolo Veronesi, Alessandro Costantini
- NGI GRNET - Dimitris Dellis, Kostas Koumantaros
- NGI NL - Dennis van Dok
- NGI ES - Victor Fernandez Albor
- MPI VT - John Walsh
- VLEMED VO - Silvia Olabarriaga, Hurng-Chun Lee, Juan Luis Font
- BIOMED - Johan Montagnat, Franck Michel, Sorina Pop
- WeNMR - Alexandre Bonvin
- AUGER VO - Jiri Chudoba
- glast.org - Michael Kuss, Francesco Longo
- EGI.eu - Tiziana Ferrari, Małgorzata Krakowian, Peter Solagna
CVMFS (CERN Virtual Machine File System) has already proved to be a good solution for the distribution of application software across the resource centres, and several NGIs are offering CVMFS to national VOs.
During the April 2013 OMB in Manchester the idea of offering CVMFS as a service for all EGI VOs was discussed. The aim is a CVMFS infrastructure that can be used not only within EGI, but also in collaboration with OSG for VOs that access resources hosted by both infrastructures.
|NGI Ibergrid||IBERGRID is already using CVMFS stratum 0 deployed at RAL as a service|
|AAROC||We were planning to deploy a stratum 0 in South Africa for VO sagrid applications, as well as those in use in other sub-Saharan countries.|
|NGI HR||We are running CVMFS for our regional VO.|
|NGI FI||We are one of the non-WLCG organizations running cvmfs.|
|OSG||We recently implemented a CVMFS based service for the OSG. Our intended use case is for our virtual organizations and software group to distribute their content by this mechanism. Our design is quite simple, we allow a very few people write access and everyone read access. We are actively adding resources with access enabled, wide adoption being the goal.|
|NGI IT||We have a very basic deployment for our catch-all VO, but it has not seen much use so far.|
We run stratum-0 repos (v2.0.15) for 5 VOs (mice, na62, hone, enmr.eu, phys.vo.ibergrid.eu). Our design allows installation jobs (run by the VO_SGM) to write to an NFS area, which is then rsynced to the CVMFS stratum-0 areas. For VOs not supported by GridPP UK (in terms of CPU allocations), we manually upload and maintain their CVMFS repos, but we are currently working on a web interface that will allow VOs to manage their own software.
We also run a CVMFS stratum-1 service (part of the LHC CVMFS infrastructure) that includes replicas of the above-mentioned VO repositories.
Our plans are to migrate the stratum-0 nodes to v2.1.X on SL6. We are also working with CERN to replicate these 5 repositories on their stratum-1 service (3 of the 5 replicated so far), and we are looking to establish a network of stratum-1 servers to consolidate the CVMFS service infrastructure for non-LHC (or small) VOs (tests with NIKHEF to start soon).
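The write-to-NFS-then-rsync design described above can be sketched with the v2.1 server tools that the migration targets. This is a hedged illustration, not the actual RAL procedure: the repository name and staging path are made up for the example.

```shell
# Hypothetical sketch of one Stratum-0 publish cycle using the
# cvmfs-server v2.1 transaction/publish workflow. REPO and STAGE
# are illustrative, not the real RAL names.
REPO=mice.gridpp.ac.uk
STAGE=/srv/nfs-stage/${REPO}

cvmfs_server transaction ${REPO}                  # open a writable transaction
rsync -a --delete "${STAGE}/" "/cvmfs/${REPO}/"   # sync the NFS area written by VO_SGM jobs
cvmfs_server publish ${REPO}                      # sign and publish the new revision
```

In the v2.0.x series the server tooling differs, which is one reason the migration to v2.1.X matters for automating VO software management.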
- (1) is your community already familiar with CVMFS and is CVMFS already used in some form to support production activities?
- (2) are you interested in trying CVMFS?
|VLEMED VO||(1) We were not using it until this email, but we asked around. The Dutch NGI is already taking action for non-HEP users; we look forward to their findings.|
|Johan Montagnat||(1) No, we were not aware of this tool before.
(2) There is a clear need for easily deploying software packages grid-wide in the LS community. The existing solution of populating the VO software space per site is often considered tedious, and it leads to an incoherent state when software cannot be installed at some sites due to installation job issues, so an alternative, global-scale solution would be welcome. We would need to know a bit more about CVMFS though. From the slides shown, it is not clear how the software registered in the central CVMFS root directory is accessed from the sites, and from the worker nodes in particular. (Is the CVMFS root mounted on all worker nodes? What is the performance impact of accessing this remote software?)|
|Alexandre Bonvin||1. We are already making use of it. Very nice and simple. Only a few sites have it in place at this time.|
|Jiri Chudoba - auger||1. The auger VO plans to use CVMFS. We have not yet built our own system of servers and we would like to try a "central" service.|
|Michael Kuss - glast.org|| 1. No
Comments on Catalin's questions/remarks from the 3 October 2013 meeting:
Dimension of software area: 200k files in 15.4 GB. These are 4 releases of the level-1 analysis package and 1 release of the science analysis software, 1 GB each; the rest is common libraries, the calibration DB, etc. The code is all compiled for RHEL5 32-bit; this may change soon to RHEL6 64-bit, so for a short transition period the amount may double.
Update frequency: right now 3-4 times a year, this may change to about once a month.
Sites: 8 active EGI sites. We also got enabled at two OSG sites, but no software is installed there yet (I was hoping for CVMFS).
Max file size: bummer! We have a few FITS files of up to 500 MB, used to store the spacecraft positions for long-term simulations. Is 100 MB a hard limit?
No overwrite: the analysis packages and libraries are versioned. I'm not sure about the calibration DB and other auxiliary files (like the huge FITS files). What is the reasoning behind this design choice? And what about "delete"?
Initial working plan
The following working plan was proposed at the CVMFS-TF kick-off meeting (15 August 2013):
- Collect expressions of interest from VOs.
- Understand sites' availability to install clients.
- Understand NGIs' availability to install regional squids.
- Mirror between sites hosting stratum-0 repositories, creating a network of stratum-1 servers.
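The client-side part of this plan (installing clients and pointing them at a regional squid) can be sketched as below. The proxy URL and repository name are assumptions for illustration, not real endpoints.

```shell
# Minimal CVMFS client setup on a worker node (illustrative values).
# The squid proxy URL is hypothetical; substitute your NGI's regional squid.
cat > /etc/cvmfs/default.local <<'EOF'
CVMFS_REPOSITORIES=mice.gridpp.ac.uk
CVMFS_HTTP_PROXY="http://squid.example-ngi.eu:3128"
CVMFS_QUOTA_LIMIT=10000          # local cache limit in MB
EOF

cvmfs_config setup                # configure autofs/fuse for /cvmfs
cvmfs_config probe                # verify the configured repositories mount
```

With this in place, repositories appear on demand under /cvmfs/ on each worker node, answering the common question of how the software becomes visible at sites.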
Webinar - 5 September 2013
Aimed to provide needed information to NGIs/sites and user communities about the technical details and possible architecture.
The complete abstract for the presentation and registration details are available on INDICO at: https://indico.egi.eu/indico/conferenceDisplay.py?confId=1809
QA webinar chat window: https://wiki.egi.eu/wiki/File:CVMFS_webinar_QA_chat_window.doc
EGI Technical Forum - Madrid, 16 - 20 September 2013
A presentation on "CVMFS for EGI VOs" has been given during the User Community Board session (https://indico.egi.eu/indico/conferenceTimeTable.py?confId=1851#20130917) followed by Q&A and other discussions about creating the EGI CVMFS infrastructure.
Expressions of interest have been gathered from VO representatives (biomed, auger, vlemed, glast.org) during the Technical Forum, and action steps have been agreed.
CHEP 2013 - Amsterdam, 14 - 18 October 2013
A poster on "CernVM-FS – Beyond LHC Computing" has been presented (http://indico.cern.ch/getFile.py/access?contribId=392&sessionId=9&resId=0&materialId=poster&confId=214784) within the "Distributed Processing and Data Handling: Infrastructure, Sites, and Virtualization" track.
Expressions of interest in using CVMFS as a software distribution mechanism have been gathered from cernatschool.org and t2k.org representatives.
Further actions have been agreed regarding Stratum-1 cross-replication between RAL Tier-1 and OSG sites.
Operations Management Board - 24 April 2014
A presentation on "CVMFS task force update" has been given during the April OMB meeting (https://indico.egi.eu/indico/materialDisplay.py?contribId=8&materialId=slides&confId=2162)
EGI Community Forum - Helsinki, 19 - 23 May 2014
A workshop "EGI services for global software and common data distribution" (https://indico.egi.eu/indico/sessionDisplay.py?sessionId=37&confId=1994#20140520) took place on Tuesday 20 May. Presentations on various aspects of deploying and using the CernVM-FS and Frontier technologies were delivered.
During the workshop, the need for an 'egi.eu' CVMFS domain was discussed. It was agreed that it will be hosted at RAL, but other stratum-0 sites are welcome to host *.egi.eu repositories (technical details TBC).
A hackathon on "Getting Started with the CernVM FileSystem or the Frontier Distributed Database Caching System" (https://indico.egi.eu/indico/contributionDisplay.py?sessionId=37&contribId=55&confId=1994) also followed the workshop. Users were assisted by specialists in trying out the CernVM FileSystem and/or the Frontier Distributed Database Caching System with their own applications. Site-specific CVMFS problems were also discussed and suggestions for fixing them were given.
The poster "Software compatibility check framework for grid computing elements" (https://indico.egi.eu/indico/contributionDisplay.py?contribId=16&confId=2016) presented how the use of CernVM-FS overcame problems associated with software distribution on the grid.
Following discussions with maintainers of EGI AppDB, it was agreed to create a CVMFS repository that will contain bits of software currently hosted by AppDB. It will be located under the new 'egi.eu' CVMFS domain.
New CVMFS 'egi.eu' domain active - September 2014
Work has been carried out and finalised on the configuration of the new 'egi.eu' CVMFS domain, which will replace the 'gridpp.ac.uk' domain that accommodates the EGI VO repositories. All existing repositories (as /cvmfs/<repo_name>.gridpp.ac.uk) located at the Stratum-0 (RAL) have been replicated as /cvmfs/<repo_name>.egi.eu, and both domains are replicated by the Stratum-1s at RAL, NIKHEF and ASGC. TRIUMF is also replicating the '*.egi.eu' repositories.
A few sites across the Grid have been contacted and agreed to manually configure the new domain, and tests proved successful.
New cvmfs-keys v1.5 package available - 1 November 2014
A new cvmfs-keys v1.5-1 package has been made available (http://cernvm.cern.ch/portal/filesystem/cvmfs-keys-1.5). It mainly adds the public keys and Stratum-1 server addresses for the egi.eu and opensciencegrid.org CVMFS domains, and its roll-out will be of significant importance at sites supporting EGI VOs, as it automatically configures the new 'egi.eu' domain. System administrators at sites are therefore encouraged to install the package (https://cvmrepo.web.cern.ch/cvmrepo/yum/cvmfs/EL/5/x86_64/cvmfs-keys-1.5-1.noarch.rpm).
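On a worker node the roll-out amounts to installing the RPM linked above and verifying that an 'egi.eu' repository mounts. A sketch, assuming a cvmfs 2.1 client (the repository name used for the probe is just an example):

```shell
# Install the cvmfs-keys 1.5-1 package, which ships the public keys
# and Stratum-1 addresses for the egi.eu and opensciencegrid.org domains.
rpm -Uvh https://cvmrepo.web.cern.ch/cvmrepo/yum/cvmfs/EL/5/x86_64/cvmfs-keys-1.5-1.noarch.rpm

# Pick up the new configuration and verify an egi.eu repository mounts.
cvmfs_config reload
cvmfs_config probe mice.egi.eu
```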
mice, na62, hone, phys.vo.ibergrid.eu, enmr.eu, glast.org, hyperk.org, t2k.org, cernatschool.org, biomed, snoplus.snolab.ca, auger, km3net.org, pheno - yes on stratum-0 v2.1 (on both gridpp.ac.uk and egi.eu domains)
comet.j-parc.jp - in progress
|EGI Stratum-1 (dedicated)||replicates the vlemed, *.desy.de and oasis.opensciencegrid.org repositories|
|Tier-2s UK||as requested (mice, na62, hyperk.org, t2k.org)|
|OSG||oasis.opensciencegrid.org on stratum-0||replicates the non-LHC (gridpp.ac.uk) repositories from RAL-LCG2 as requested (enmr, auger, geant4)|
|CERN||replicates mice, na62, hone, phys-ibergrid and wenmr (gridpp.ac.uk) repos from RAL-LCG2|
|NIKHEF||vlemed on stratum-0 v2.1||replicates all *.gridpp.ac.uk and *.egi.eu repos from RAL-LCG2|
|DESY||ilc, calice, hermes, hone, olympus, xfel, zeus on stratum-0 v2.1|
|ASGC||replicates all *.gridpp.ac.uk and *.egi.eu repos from RAL-LCG2|
|TRIUMF||replicates the entire 'egi.eu' domain from RAL-LCG2|
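The replication relationships in the table above follow the standard Stratum-1 pattern of the v2.1 server tools. A hedged sketch, with a hypothetical Stratum-0 URL standing in for the real RAL endpoint:

```shell
# Create a Stratum-1 replica of an egi.eu repository (illustrative
# Stratum-0 URL; the key path assumes the cvmfs-keys package is installed).
cvmfs_server add-replica -o root \
    http://stratum0.example.ac.uk/cvmfs/mice.egi.eu \
    /etc/cvmfs/keys/egi.eu.pub

# Pull the initial snapshot; in production this runs periodically from cron.
cvmfs_server snapshot mice.egi.eu
```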
For the 'egi.eu' repositories (auger, biomed, cernatschool, glast, hyperk, km3net, mice, na62, pheno, phys-ibergrid, snoplus, t2k, wenmr), please install the latest cvmfs-keys v1.5-1 RPM (https://cvmrepo.web.cern.ch/cvmrepo/yum/cvmfs/EL/5/x86_64/cvmfs-keys-1.5-1.noarch.rpm). It automatically configures the 'egi.eu' domain by adding the public keys and Stratum-1 server addresses, and does the same for the 'opensciencegrid.org' domain.
For other repositories see below.
CVMFS home page http://cernvm.cern.ch/portal/filesystem
CVMFS - Beyond LHC Computing https://indico.egi.eu/indico/getFile.py/access?contribId=7&resId=0&materialId=slides&confId=1235
RAL Tier1 CVMFS https://www.gridpp.ac.uk/wiki/RAL_Tier1_CVMFS
CVMFS for non LHC VOs https://www.gridpp.ac.uk/wiki/RALnonLHCCVMFS