From EGIWiki — revision as of 15:08, 29 March 2019 by Apaolini
General information


UMD 4.8.2 has been released today.

It includes the following updates for CentOS 7 and SL6. Please let us know about comments and corrections.


  • StoRM 1.11.14 (*SL6 only*) - bug fixes
  • gfal2 2.16.1 - bug fixes
  • gfal2-python 1.9.5 - bug fixes, added gfal2-python3
  • davix 0.7.1 - bug fixes
  • CGSI-gSOAP 1.3.11 - fix on SRM timeout causing transfer failures on CC7
  • XRootD 4.8.5 - several major and minor fixes
  • yaim-core 5.1.7 - fix for the "join" command warning on CentOS 6/7
  • ARC 15.03.19 - bugfix release, mainly to address how ARC counts HELD jobs in Condor
  • GridSite 2.3.5 - bugfix release addressing formatting and warning fixes. The RPM packaging was updated to provide CGI scripts at the same location as the native Fedora and EPEL packaging, and the project URL has been updated

CREAM End Of Support Notice

Preview repository

  • released on 2019-02-15
    • Preview 1.21.0 AppDB info (sl6): APEL Client/Server 1.8.0, davix 0.7.1, dCache 3.2.46, DMLite 1.11.0, Dynafed 1.4.0, gfal2 2.16.1 and gfal2-python 1.9.5, xrootd 4.8.5
    • Preview 2.21.0 AppDB info (CentOS 7): APEL Client/Server 1.8.0, davix 0.7.1, dCache 3.2.46, DMLite 1.11.0, Dynafed 1.4.0, gfal2 2.16.1 and gfal2-python 1.9.5, xrootd 4.8.5




Feedback from DMSU

Notifications from ARGO about nagios probe failures

In the process of implementing the notification system in ARGO, the following changes were introduced in GOC-DB:

  • Notifications flag at the site level
  • Notifications flag at the service endpoint level

In this way ARGO will retrieve from GOC-DB the list of sites and services for which email notifications should be sent, together with the related recipients.

The logic of the notifications is the following: the combination of the site-level and service-endpoint-level flags determines whether to notify (Site, Service → Notify?).

If there isn't any email contact defined at the service endpoint level, the site contact will be used. Please review your contacts.

You can enable the notifications.

NOTE: It is not mandatory for sites.
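The logic above can be sketched as follows. This is an illustrative reading, not the actual GOC-DB schema or ARGO code; it assumes that an endpoint-level flag, when set, overrides the site-level one, and that the contact fallback works as described in the note above.

```python
from typing import Optional

# Hypothetical sketch of the notification logic; names are illustrative.
def should_notify(site_flag: bool, service_flag: Optional[bool]) -> bool:
    """Assumed reading: the service-endpoint flag, when set, takes
    precedence over the site-level flag."""
    return service_flag if service_flag is not None else site_flag

def notification_recipient(service_contact: Optional[str], site_contact: str) -> str:
    """As stated above: fall back to the site contact when no email
    contact is defined at the service endpoint level."""
    return service_contact or site_contact
```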

Monthly Availability/Reliability

suspended sites: LRZ (NGI_DE)

IPv6 readiness plans

LCGDM end of support and migration to / enabling of DOME

  • on 1 June 2019 support for DPM Legacy (LCGDM) will end
  • since the DPM 1.10.3 release it is possible to enable the non-legacy mode DOME (Disk Operations Management Engine, see documentation)
    • Legacy mode is when DMLite loads the Adapter+Memcache+MySQL plugins
      • The good old DPM daemon does the coordination work
      • Every process loading dmlite (httpd*4, gridftp, xrootd) needs a new pool of MySQL connections
    • Non-legacy mode is when DMLite loads DOMEAdapter
      • DOME does the coordination work and talks to mysqld
      • DOME does disk server status detection (up/down/space)
      • The DPM daemon coordinates only itself and SRM
      • Only one internal MySQL pool is used for dmlite
    • In non-legacy mode DPM now loads only DMLite::DOMEAdapter
      • dmlite-memcache, dmlite-mysql are no longer necessary
      • Resource consumption (FDs, MySQL connections, etc.) is reduced by an order of magnitude, and with it complexity and cost for us all
  • Deployment statistics: 97 sites (112 servers)
$ ldapsearch -x -LLL -H ldap:// -b "Mds-Vo-Name=local,o=grid" '(&(objectClass=GlueSE)(GlueSEImplementationName=DPM))' GlueSEImplementationVersion | grep -i ^glue | sort | uniq -c
     44 GlueSEImplementationVersion: 1.10.0
      1 GlueSEImplementationVersion: 1.12.0
     20 GlueSEImplementationVersion: 1.8.10
      4 GlueSEImplementationVersion: 1.8.11
      2 GlueSEImplementationVersion: 1.8.7
      4 GlueSEImplementationVersion: 1.8.8
      8 GlueSEImplementationVersion: 1.8.9
     29 GlueSEImplementationVersion: 1.9.0
  • reported some issues with DOME and gridftp redirection that have been fixed in v1.12.0 (not yet in EPEL)
  • wait for DPM 1.12.0 before enabling DOME
  • all the sites with older DPM versions (1.8 and 1.9) are suggested to upgrade to the latest DPM version, following the guide DPM upgrade (chapter 1, Upgrade to DPM 1.10.0 "Legacy Flavour")
    • this makes it convenient to address well in advance any problems arising from the upgrade from older versions
  • later on, after v1.12.0 is released, some other WLCG sites will test it; if no other issues are found, all the EGI sites can upgrade:
    • follow chapter 2 (Upgrade to DPM 1.10.0 "Dome Flavour") of the guide
      • DOME and the old LCGDM (srm protocol) will coexist
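The version counts from the ldapsearch output above can be tallied to see how many servers still run pre-1.10 (legacy 1.8/1.9) releases and should follow chapter 1 of the upgrade guide first. A small sketch, using the numbers reported above:

```python
# Tally of the `ldapsearch | uniq -c` output shown above:
# GlueSEImplementationVersion -> number of servers.
counts = {
    "1.10.0": 44, "1.12.0": 1, "1.8.10": 20, "1.8.11": 4,
    "1.8.7": 2, "1.8.8": 4, "1.8.9": 8, "1.9.0": 29,
}

def needs_upgrade(version: str) -> bool:
    """Versions below 1.10 should upgrade via chapter 1 of the guide."""
    major, minor = version.split(".")[:2]
    return (int(major), int(minor)) < (1, 10)

legacy = sum(n for v, n in counts.items() if needs_upgrade(v))
total = sum(counts.values())
print(f"{legacy} of {total} servers run pre-1.10 DPM")  # 67 of 112
```

The total of 112 matches the deployment statistics reported above (97 sites, 112 servers).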

HTCondorCE integration

Link to procedure:

GGUS ticket:

Steps status:

  • (In progress) Underpinning Agreement:
    • Document sent to the product team
  • (Completed) Configuration management:
    • Service type already present: org.opensciencegrid.htcondorce
  • (To do) OPS Dashboard:
    • To do when Conf Mgmt and Monitoring are completed.
  • (in progress) Information System: the info-provider is already available:
  • (in progress) Monitoring: some probes are already available
    • To check what kind of tests
    • To make a package compatible with ARGO
  • (To do) Support: to create the support unit
  • (in progress) Accounting:
  • (in progress) Documentation
  • (In progress) Security
    • Condor team filled in the questionnaire and sent it to Linda
  • UMD
    • suggested to act in advance because the functional tests could take a lot of time
    • Ticket for the inclusion:
      • HTCondor-CE is released with HTCondor in a version without the dependencies on other OSG software. Moreover, HTCondor-CE depends on HTCondor (though not for use as a batch scheduler), so both packages will need to be distributed.
    • Asked whether any automated deployment configuration files (Ansible, Puppet or other) are supported by the HTCondor team
      • A copy of the Ansible files that OSG uses to configure HTCondor-CE on their internal testbed machines is available. They can provide help with a non-OSG version of the files
    • Provided information for the UMD card, which needs to be created

webdav probes in OPERATORS profile

The webdav probes were included in the ARGO_MON_OPERATORS profile after approval at the January 2018 OMB: in this way the failures will generate an alarm on the dashboard, and the ROD teams can open a ticket to the failing sites. If no particular issue occurs, and if at least 75% of webdav endpoints are passing the tests, the probes will be added to the ARGO_MON_CRITICAL profile, so the results of these probes will be taken into account in the A/R figures.
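The 75% promotion criterion can be sketched as follows. This is purely illustrative (ARGO evaluates this internally, and the function name and numbers here are made up):

```python
# Illustrative check of the criterion described above: the webdav probes
# move to ARGO_MON_CRITICAL only if at least 75% of endpoints pass.
def ready_for_critical(passing: int, total: int, threshold: float = 0.75) -> bool:
    return total > 0 and passing / total >= threshold

print(ready_for_critical(80, 100))  # True
print(ready_for_critical(70, 100))  # False
```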

A bug in XRootD has to be fixed before moving the webdav probes to the critical profile

List of sites that have not completed the configuration yet:

Still some issues with:

Storage accounting deployment

During the September 2017 meeting, the OMB approved the full-scale deployment of storage accounting. The APEL team tested it with a group of early-adopter sites, and the results show that storage accounting is now production-ready.

Storage accounting is currently supported only for the DPM and dCache storage elements; therefore, only the resource centres deploying these kinds of storage elements are requested to publish storage accounting data.

In order to properly install and configure the storage accounting scripts, please follow the instructions reported in the wiki:

IMPORTANT: be sure to have installed the script v1.0.4

After setting up a daily cron job and running the accounting software, look for your data in the Accounting Portal. If it does not appear within 24 hours, or there are other errors, please open a GGUS ticket to APEL, who will help debug the process.

Test portal:

IMPORTANT: Do not encrypt the storage records with your host certificate; please comment out the "server_cert" variable in sender.cfg
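A minimal sketch of the relevant part of sender.cfg, assuming the standard APEL SSM layout; the exact paths and surrounding options vary by installation:

```ini
; sender.cfg fragment (illustrative): leave server_cert commented out
; so the storage records are NOT encrypted with the host certificate.
[certificates]
certificate: /etc/grid-security/hostcert.pem
key: /etc/grid-security/hostkey.pem
; server_cert: /etc/grid-security/servercert.pem
```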

List of sites already publishing and of tickets opened is reported here.

Several sites are not publishing storage accounting data yet. NGIs, please follow up with the sites on the configuration of the script in order to speed up the process.


Next meeting