CVMFS Task Force

From EGIWiki


Coordinator: Catalin Condurache/NGI_UK

Meetings page: Agendas

Mailing list: cvmfs-tf (at)



CVMFS (CERN Virtual Machine File System) has already proved to be a good solution for the distribution of application software across the resource centres, and several NGIs are offering CVMFS to national VOs.

During the April 2013 OMB in Manchester the idea of offering CVMFS as a service for all EGI VOs was discussed. The aim is a CVMFS infrastructure that can be used not only within EGI, but also in collaboration with OSG, for VOs that access resources hosted by both infrastructures.

NGI Status

NGI Ibergrid: IBERGRID is already using the CVMFS Stratum-0 deployed at RAL as a service.
AAROC: We were planning to deploy a Stratum-0 in South Africa for VO sagrid applications, as well as those in use in other sub-Saharan countries.

Update: We have finished the CI platform and are ready to include the Stratum-0 in the distribution. Info at and

NGI HR: We are running CVMFS for our regional VO.
NGI FI: We are one of the non-WLCG organisations running CVMFS.
OSG: We recently implemented a CVMFS-based service for the OSG. The intended use case is for our virtual organisations and software group to distribute their content by this mechanism. The design is quite simple: very few people have write access, and everyone has read access. We are actively adding resources with access enabled, with wide adoption being the goal.
NGI IT: We have a very basic deployment for our catch-all VO, but it has not seen much use so far.

We run Stratum-0 repositories (v2.0.15) for 5 VOs (mice, na62, hone,,. Our design lets installation jobs (run by the VO SGM) write to an NFS area, which is then rsynced to the CVMFS Stratum-0 areas. For VOs not supported by GridPP UK (in terms of CPU allocations), we manually upload and maintain their CVMFS repositories, but we are currently working on a web interface that will let them manage their own software.
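The NFS-then-rsync publication flow described above could be sketched roughly as follows. This is an illustration only: the repository name and paths are hypothetical, and the exact commands depend on the CVMFS server version (the transaction/publish pair shown here is the 2.1-style workflow).

```shell
# Hypothetical sketch of a CVMFS 2.1-style publish step.
# Repository name and paths are illustrative, not the actual RAL setup.

# 1. Open a writable transaction on the Stratum-0 repository
cvmfs_server transaction mice.example.org

# 2. Sync the software the installation job left in the NFS area
rsync -av --delete /nfs/vo-software/mice/ /cvmfs/mice.example.org/

# 3. Sign and publish the new catalog so clients see the update
cvmfs_server publish mice.example.org
```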

We also run a CVMFS Stratum-1 service (part of the LHC CVMFS infrastructure) which includes replicas of the above-mentioned VO repositories.

Our plans: migrate the Stratum-0 nodes to v2.1.x on SL6; work with CERN to replicate these five repositories on their Stratum-1 service (three of the five are replicated so far); and establish a network of Stratum-1 servers to consolidate the CVMFS service infrastructure for the non-LHC (small) VOs (tests with NIKHEF to start soon).

VO status

VLEMED VO: (1) Used in some form to support production activities? We were not until this email, but we have asked around. The Dutch NGI is already taking action for non-HEP users; we look forward to their findings.

(2) Yes.

Johan Montagnat: (1) No, we were not aware of this tool before.

(2) There is a clear need for easily deploying software packages grid-wide in the LS community. The existing solution of populating the VO software space per site is often considered tedious, and it leads to an incoherent state when software cannot be installed at some sites due to installation job issues. An alternative, global-scale solution would therefore be welcome. We would need to know a bit more about CVMFS, though. From the slides shown, it is not clear how the software registered under the central CVMFS root directory is accessible from the sites, and from the worker nodes in particular. (Is the CVMFS root mounted on all worker nodes? What is the performance impact of accessing this remote software?)
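For context on the question above: CVMFS is mounted read-only on each worker node via FUSE/autofs; files are fetched over HTTP on first access and cached locally (usually through a site squid proxy), so only the software actually used is downloaded. A minimal client configuration might look like the sketch below, where the repository name, proxy host and cache size are illustrative values, not a prescribed setup.

```shell
# /etc/cvmfs/default.local -- minimal client settings (illustrative values)
CVMFS_REPOSITORIES=mice.example.org          # repositories to mount on demand
CVMFS_HTTP_PROXY="http://squid.example.org:3128"  # site squid proxy
CVMFS_QUOTA_LIMIT=20000                      # local disk cache limit, in MB
```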

Alexandre Bonvin: 1. We are already making use of it. Very nice and simple. Only a few sites have it in place at this time.
Jiri Chudoba (auger): 1. The auger VO plans to use CVMFS. We have not yet built our own system of servers and we would like to try the "central" service.
Michael Kuss - : 1. No.

2. Yes.

Comments on Catalin's questions/remarks from the 3 October 2013 meeting:

Dimension of software area: 200k files in 15.4 GB. These are 4 releases of the level-1 analysis package and 1 of the science analysis software, about 1 GB each. The rest are common libraries, a calibration DB, etc. The code is all RHEL5 32-bit compiled; this may change soon to RHEL6 64-bit, so for a short transition period the amount may double.

Update frequency: right now 3-4 times a year; this may change to about once a month.

Sites: 8 active EGI sites. We have also been enabled at two OSG sites, but no software is installed there (I was hoping for CVMFS).

Max file size: bummer! We have a few FITS files of up to 500 MB, used to store the spacecraft positions for long-term simulations. Is 100 MB a hard limit?

No overwrite: the analysis packages and libraries are versioned. I am not sure about the calibration DB and other auxiliary files (like the huge FITS files). What is the reasoning behind this design choice? And what about "delete"?




Initial working plan

The following was proposed at the CVMFS-TF kick-off meeting (15 August 2013):

- Collect expressions of interest from VOs.

- Understand sites' availability to install clients.

- Understand NGIs' availability to install regional squids.

- Set up mirroring between sites hosting Stratum-0 repositories, creating a network of Stratum-1 servers.
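A regional squid for CVMFS is a standard forward caching proxy. A minimal configuration fragment, following the usual CVMFS squid recipe, might look like the sketch below; the network range, port and cache sizes are illustrative and should be adapted to the site.

```shell
# /etc/squid/squid.conf fragment for a CVMFS site/regional squid (illustrative)
http_port 3128

# Only allow local worker nodes to use the proxy
acl local_nodes src 192.168.0.0/16
http_access allow local_nodes
http_access deny all

# Cache sizing suited to CVMFS objects: keep small objects in memory,
# spill larger ones to disk
cache_mem 128 MB
maximum_object_size_in_memory 128 KB
cache_dir ufs /var/spool/squid 50000 16 256
maximum_object_size 1024 MB
```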

Webinar - 5 September 2013

Aimed to provide needed information to NGIs/sites and user communities about the technical details and possible architecture.

The complete abstract for the presentation and registration details are available on INDICO at:

Q&A webinar chat window:

Recording -

EGI Technical Forum - Madrid, 16 - 20 September 2013

A presentation on "CVMFS for EGI VOs" was given during the User Community Board session ( followed by Q&A and further discussion about creating the EGI CVMFS infrastructure.

Expressions of interest were gathered from VO representatives (biomed, auger, vlemed, during the Technical Forum, and action steps were agreed.

CHEP 2013 - Amsterdam, 14 - 18 October 2013

A poster on "CernVM-FS – Beyond LHC Computing" was presented ( within the "Distributed Processing and Data Handling: Infrastructure, Sites, and Virtualization" track.

Expressions of interest in using CVMFS as a software distribution mechanism were gathered from and representatives.

Further actions were agreed regarding Stratum-1 cross-replication between the RAL Tier-1 and OSG sites.

Operations Management Board - 24 April 2014

A presentation on "CVMFS task force update" was given during the April OMB meeting (

EGI Community Forum - Helsinki, 19 - 23 May 2014

A workshop, "EGI services for global software and common data distribution" (, took place on Tuesday 20 May. Presentations on various aspects of deploying and using the CernVM-FS and Frontier technologies were delivered.

During the workshop the need for a CVMFS domain was discussed. It was agreed that it would be hosted at RAL, but other Stratum-0 sites are welcome to host * repositories (technical details TBC).

A hackathon, "Getting Started with the CernVM FileSystem or the Frontier Distributed Database Caching System" (, followed the workshop. Specialists assisted users in trying out the CernVM FileSystem and/or the Frontier Distributed Database Caching System with their own applications. Site-specific CVMFS problems were also discussed and suggestions for fixing them were given.

The poster "Software compatibility check framework for grid computing elements" ( presented how the use of CernVM-FS overcame problems associated with software distribution on the grid.

Following discussions with the maintainers of the EGI AppDB, it was agreed to create a CVMFS repository containing software currently hosted by AppDB. It will be located under the new '' CVMFS domain.

New CVMFS '' domain active - September 2014

Work on the configuration of the new '' CVMFS domain has been completed. It will replace the '' domain, which accommodates the EGI VO repositories. All existing repositories (/cvmfs/<repo_name>) located at the Stratum-0 (RAL) have been replicated as /cvmfs/<repo_name>, and both domains are replicated by the Stratum-1s at RAL, NIKHEF and ASGC. TRIUMF also replicates the '*' repositories.
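Creating such a replica on a Stratum-1 typically involves registering the replica once and then pulling snapshots periodically. The sketch below uses CVMFS 2.1-style commands; the repository name, Stratum-0 URL and key path are illustrative, not the actual RAL configuration.

```shell
# Register a new replica of a Stratum-0 repository on this Stratum-1
# (names, URL and key path are illustrative)
cvmfs_server add-replica -o root \
    http://stratum0.example.org/cvmfs/mice.example.org \
    /etc/cvmfs/keys/example.org.pub

# Pull the initial snapshot; afterwards run this periodically (e.g. from cron)
# to keep the replica in sync with the Stratum-0
cvmfs_server snapshot mice.example.org
```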

A few sites across the grid have been contacted and agreed to configure the new domain manually, and the tests were successful.

New cvmfs-keys v1.5 package available - 1 November 2014

A new cvmfs-keys v1.5-1 package has been made available (. It mainly adds the public keys and Stratum-1 server addresses for the and CVMFS domains. Its roll-out will be of significant importance at sites supporting EGI VOs, as it automatically configures the new '' domain, so system administrators at sites are encouraged to install the package (
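On a client node, installing the packages and verifying the setup could look like the sketch below. It assumes an appropriate yum repository is already configured, and the probed repository name is illustrative.

```shell
# Install the CVMFS client and the keys package
# (assumes a yum repository providing them is configured)
yum install -y cvmfs cvmfs-keys

# Set up autofs integration and sanity-check the local configuration
cvmfs_config setup
cvmfs_config chksetup

# Verify that a repository mounts and responds (illustrative repository name)
cvmfs_config probe mice.example.org
```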


EGI CVMFS Deployment Status
Site: Stratum-0 / Stratum-1 / Squid / Clients

mice, na62, hone,,,,,,, biomed,, auger,, pheno: on Stratum-0 v2.1 (on both and domains); web interface in progress. Repositories replicated by the dedicated EGI Stratum-1; vlemed, * and repos replicated by the EGI Stratum-1. Squid: yes, all VOs. Clients: UK Tier-2s as requested (mice, na62,, t2k.org).

OSG: replicates the non-LHC ( repositories from RAL-LCG2 as requested: enmr, auger, geant4.
CERN: replicates the mice, na62, hone, phys-ibergrid and wenmr ( repos from RAL-LCG2.
NIKHEF: vlemed on Stratum-0 v2.1; replicates all * and * repos from RAL-LCG2.
DESY: ilc, calice, hermes, hone, olympus, xfel, zeus on Stratum-0 v2.1.
ASGC: replicates all * and * repos from RAL-LCG2.
TRIUMF: replicates the entire '' domain from RAL-LCG2.


For the '' repositories (auger, biomed, cernatschool, glast, hyperk, km3net, mice, na62, pheno, phys-ibergrid, snoplus, t2k, wenmr), please install the latest cvmfs-keys v1.5-1 RPM (. It automatically configures the '' domain by adding the public keys and Stratum-1 server addresses, and it does the same for the '' domain.

For other repositories see below.
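For repositories in a domain not covered by the cvmfs-keys package, the client can be configured by hand with a domain configuration file. The fragment below is a sketch: the domain name, Stratum-1 URLs and key path are all illustrative placeholders.

```shell
# /etc/cvmfs/domain.d/example.org.conf -- manual domain configuration (illustrative)
# Semicolon-separated Stratum-1 list; @fqrn@ expands to the full repository name
CVMFS_SERVER_URL="http://cvmfs-s1a.example.org:8000/cvmfs/@fqrn@;http://cvmfs-s1b.example.org:8000/cvmfs/@fqrn@"
# Public key used to verify repository signatures for this domain
CVMFS_PUBLIC_KEY=/etc/cvmfs/keys/example.org.pub
```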

Recommended Configurations at Replicas and Clients Level
VO Variables Values

Useful Links

CVMFS home page

CVMFS - Beyond LHC Computing


CVMFS for non LHC VOs
