General information
Middleware
UMD/CMD
- UMD4 regular release (4.7.0) planned for April 2018; dedicated updates are always possible
- UMD3 deprecation
- WMS decommissioning plan presented at the OMB
- in parallel, the UMD team will test upgrading the umd-release package from UMD3/SL6 to UMD4/SL6 to make sure everything works properly (a minimal sketch of the repository switch is shown at the end of this list)
- plan will be arranged and agreed with PTs in January/February
- at some point UMD3 will be "frozen" (no more updates of any kind, not even security updates)
- the OMB suggested removing it completely so that it is no longer used
- we will probably establish a period of 2-4 weeks during which sites become progressively aware that the old repositories no longer work and switch to UMD4/SL6
- if any security issue comes out during that period, we will ask to shut down the repository
- CMD-OS update still in preparation: found the SR for cloudkeeper, but unable to get packages for Ubuntu; the user ID isolation patch for Mitaka/Ubuntu cannot be included yet
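To give an idea of what the umd-release upgrade test covers, here is a minimal sketch of the repository switch on an SL6 host; the package URL and version below are placeholders/assumptions, and the actual, verified upgrade path will be provided by the UMD team:
$ yum remove umd-release    # drop the UMD-3 repository definitions
$ yum install http://repository.egi.eu/sw/production/umd/4/sl6/x86_64/updates/umd-release-<version>.el6.noarch.rpm    # hypothetical package URL/version
$ yum clean all && yum update    # pull in the UMD-4 builds of the installed middleware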
Preview repository
released on 2018-01-23
- Preview 1.16.0 AppDB info (sl6): APEL-SSM 2.2.0, ARC 15.03 update 18, CVMFS 2.4.4, davix 0.6.7, dCache 2.16.58 and dcap 2.47.12, DPM 1.9.2, XRootD 4.8.0
- Preview 1.16.1 (sl6): it simply fixes a problem in the apel-ssm file released with the previous update
- Preview 2.16.0 AppDB info (CentOS 7): APEL-SSM 2.2.0, ARC 15.03 update 18, CVMFS 2.4.4, davix 0.6.7, dCache 3.1.27 and dcap 2.47.12, DPM 1.9.2, XRootD 4.8.0
Operations
GGUS Support Unit review
- EGCF 2015 -> contacted by email
- EMI WN 2015 -> to be changed together with UI
- EMIR 2013 -> contacted by email
- OpenNebula 2015 -> to be reviewed in the context of the redefinition of the FedCloud-related SU
- rOCCI 2015 -> to be reviewed in the context of the redefinition of the FedCloud-related SU
- UNICORE-Client 2013 -> can be closed
ARGO/SAM
FedCloud
- hardening FedCloud Appliances in App-DB (in progress)
- cloudkeeper for OpenStack (the vmcatcher replacement) cannot yet be distributed for Mitaka/Ubuntu through CMD (missing packages); an installation campaign bypassing the UMD process (this time only) has been proposed to the FedCloud TF
Feedback from Helpdesk
Monthly Availability/Reliability
- Underperforming sites from past A/R reports with issues not yet fixed:
- AfricaArabia: https://ggus.eu/index.php?mode=ticket_info&ticket_id=132807 (DZ-01-ARN): SRM and site-BDII problems solved; accounting data are sent to the EGI central queue and published again
- NOTE: all AfricaArabia sites need to change the messaging queue to /queue/global.accounting.cpu.central and most likely republish data from August 2017 onwards
- AsiaPacific: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131661
- IN-DAE-VECC-02 (OK), PK-CIIT (site-BDII problems)
- https://ggus.eu/index.php?mode=ticket_info&ticket_id=132808 (TW-NTU-HEP)
- NGI_AEGIS: https://ggus.eu/index.php?mode=ticket_info&ticket_id=132809 (AEGIS01-IPB-SCL)
- NGI_PL: https://ggus.eu/index.php?mode=ticket_info&ticket_id=132815 PSNC (SRM issues, improving)
- Sites underperforming for 3 consecutive months, underperforming NGIs, QoS violations:
- AfricaArabia ZA-WITS-CORE https://ggus.eu/index.php?mode=ticket_info&ticket_id=133288 (move SRM to CentOS7)
- AsiaPacific TW-NCHC https://ggus.eu/index.php?mode=ticket_info&ticket_id=133282 (DPM problems)
- NGI_IL: https://ggus.eu/index.php?mode=ticket_info&ticket_id=133284 (IL-TAU-HEP, TECHNION-HEP) intermittent failures, the ARGO team has been asked for help
- NGI_IT https://ggus.eu/index.php?mode=ticket_info&ticket_id=133285 (IGI-BOLOGNA INFN-BOLOGNA-T3 INFN-CNAF-LHCB INFN-T1) the sites are slowly coming back online
suspended sites:
Compute and storage resources to be published in the BDII - GLUE2
It is important that sites publish their resources in the BDII (GLUE2 schema) to keep track of the capacity of our infrastructure and how it evolves over time. The NGIs are asked to follow up with their sites to verify that the information is properly published:
- several SRM servers (dCache and DPM) and ARC-CEs are missing in GLUE2 Schema
- some benchmark values are clearly wrong
In particular we need to know:
- number of cores and benchmark (Manual for Hepspec06 benchmark)
- amount of storage (disk and tape) available
On VAPOR you can easily find the capacity of your NGI and which resources are missing:
- "figures" section: http://operations-portal.egi.eu/vapor/resources/GL2ResSummary
- Example of an ldap query for checking whether a site is publishing the HEP-SPEC06 benchmark:
$ ldapsearch -x -LLL -H ldap://egee-bdii.cnaf.infn.it:2170 -b "GLUE2DomainID=pic,GLUE2GroupID=grid,o=glue" '(&(objectClass=GLUE2Benchmark)(GLUE2BenchmarkType=hep-spec06))'

dn: GLUE2BenchmarkID=ce07.pic.es_hep-spec06,GLUE2ResourceID=ce07.pic.es,GLUE2ServiceID=ce07.pic.es_ComputingElement,GLUE2GroupID=resource,GLUE2DomainID=pic,GLUE2GroupID=grid,o=glue
GLUE2BenchmarkExecutionEnvironmentForeignKey: ce07.pic.es
GLUE2BenchmarkID: ce07.pic.es_hep-spec06
GLUE2BenchmarkType: hep-spec06
objectClass: GLUE2Entity
objectClass: GLUE2Benchmark
GLUE2BenchmarkValue: 12.1205
GLUE2EntityOtherInfo: InfoProviderName=glite-ce-glue2-benchmark-static
GLUE2EntityOtherInfo: InfoProviderVersion=1.1
GLUE2EntityOtherInfo: InfoProviderHost=ce07.pic.es
GLUE2BenchmarkComputingManagerForeignKey: ce07.pic.es_ComputingElement_Manager
GLUE2EntityName: Benchmark hep-spec06
GLUE2EntityCreationTime: 2017-06-20T16:50:48Z

dn: GLUE2BenchmarkID=ce01.pic.es_hep-spec06,GLUE2ResourceID=ce01.pic.es,GLUE2ServiceID=ce01.pic.es_ComputingElement,GLUE2GroupID=resource,GLUE2DomainID=pic,GLUE2GroupID=grid,o=glue
GLUE2BenchmarkExecutionEnvironmentForeignKey: ce01.pic.es
GLUE2BenchmarkID: ce01.pic.es_hep-spec06
GLUE2BenchmarkType: hep-spec06
objectClass: GLUE2Entity
objectClass: GLUE2Benchmark
GLUE2BenchmarkValue: 13.4856
GLUE2EntityOtherInfo: InfoProviderName=glite-ce-glue2-benchmark-static
GLUE2EntityOtherInfo: InfoProviderVersion=1.1
GLUE2EntityOtherInfo: InfoProviderHost=ce01.pic.es
GLUE2BenchmarkComputingManagerForeignKey: ce01.pic.es_ComputingElement_Manager
GLUE2EntityName: Benchmark hep-spec06
GLUE2EntityCreationTime: 2017-09-05T07:34:26Z
- Example of an ldap query for getting the number of LogicalCPUs published by an ARC-CE (due to a bug in the info provider, CREAM-CEs publish the total number under the ExecutionEnvironment class):
$ ldapsearch -x -LLL -H ldap://egee-bdii.cnaf.infn.it:2170 -b "GLUE2DomainID=UA_ILTPE_ARC,GLUE2GroupID=grid,o=glue" 'objectClass=GLUE2ComputingManager' GLUE2ComputingManagerTotalLogicalCPUs

dn: GLUE2ManagerID=urn:ogf:ComputingManager:ds4.ilt.kharkov.ua:pbs,GLUE2ServiceID=urn:ogf:ComputingService:ds4.ilt.kharkov.ua:arex,GLUE2GroupID=services,GLUE2DomainID=UA_ILTPE_ARC,GLUE2GroupID=grid,o=glue
GLUE2ComputingManagerTotalLogicalCPUs: 168
- Example of an ldap query for getting the number of LogicalCPUs published by a CREAM-CE:
$ ldapsearch -x -LLL -H ldap://egee-bdii.cnaf.infn.it:2170 -b "GLUE2DomainID=UKI-SOUTHGRID-SUSX,GLUE2GroupID=grid,o=glue" 'objectClass=GLUE2ExecutionEnvironment' GLUE2ExecutionEnvironmentLogicalCPUs GLUE2ExecutionEnvironmentPhysicalCPUs GLUE2ExecutionEnvironmentTotalInstances

dn: GLUE2ResourceID=grid-cream-02.hpc.susx.ac.uk,GLUE2ServiceID=grid-cream-02.hpc.susx.ac.uk_ComputingElement,GLUE2GroupID=resource,GLUE2DomainID=UKI-SOUTHGRID-SUSX,GLUE2GroupID=grid,o=glue
GLUE2ExecutionEnvironmentTotalInstances: 71
GLUE2ExecutionEnvironmentLogicalCPUs: 568
GLUE2ExecutionEnvironmentPhysicalCPUs: 71
- Example of an ldap query for getting the amount of storage:
$ ldapsearch -x -LLL -H ldap://egee-bdii.cnaf.infn.it:2170 -b "GLUE2DomainID=UKI-LT2-Brunel,GLUE2GroupID=grid,o=glue" 'objectClass=GLUE2StorageServiceCapacity'

dn: GLUE2StorageServiceCapacityID=dgc-grid-38.brunel.ac.uk/capacity,GLUE2ServiceID=dgc-grid-38.brunel.ac.uk,GLUE2GroupID=resource,GLUE2DomainID=UKI-LT2-Brunel,GLUE2GroupID=grid,o=glue
GLUE2StorageServiceCapacityUsedSize: 18808
objectClass: GLUE2StorageServiceCapacity
GLUE2StorageServiceCapacityFreeSize: 2020
GLUE2StorageServiceCapacityType: online
GLUE2StorageServiceCapacityStorageServiceForeignKey: dgc-grid-38.brunel.ac.uk
GLUE2StorageServiceCapacityID: dgc-grid-38.brunel.ac.uk/capacity
GLUE2StorageServiceCapacityTotalSize: 21997
GLUE2StorageServiceCapacityReservedSize: 1168
GLUE2EntityCreationTime: 2018-02-09T15:17:16Z

dn: GLUE2StorageServiceCapacityID=dc2-grid-64.brunel.ac.uk/capacity,GLUE2ServiceID=dc2-grid-64.brunel.ac.uk,GLUE2GroupID=resource,GLUE2DomainID=UKI-LT2-Brunel,GLUE2GroupID=grid,o=glue
objectClass: GLUE2StorageServiceCapacity
GLUE2StorageServiceCapacityType: online
GLUE2StorageServiceCapacityStorageServiceForeignKey: dc2-grid-64.brunel.ac.uk
GLUE2StorageServiceCapacityID: dc2-grid-64.brunel.ac.uk/capacity
GLUE2StorageServiceCapacityTotalSize: 1416513
GLUE2StorageServiceCapacityReservedSize: 32985
GLUE2EntityCreationTime: 2018-02-09T15:18:55Z
GLUE2StorageServiceCapacityUsedSize: 1310046
GLUE2StorageServiceCapacityFreeSize: 73482
Decommissioning EMI WMS
WMS servers can be decommissioned. Please follow the procedure PROC12. The plan is:
- Starting from January 2018, put the WMS servers in draining: this will block the submission of new jobs and will allow the jobs previously submitted to finish
- inform your users in advance that you are going to drain and then decommission the WMS servers (as per PROC12)
- there might be several VOs enabled on your WMS servers: in case only a few of them need to use the service for a few more weeks, you may disable the other VOs
- On Dec 14th EGI Operations sent a new broadcast to the VOs reminding users of the forthcoming WMS decommissioning
- After the end of February, EGI Operations will open tickets to the sites that haven't started the decommissioning process yet
WMS servers in downtime on GOC-DB
VOs have to find alternatives or migrate to DIRAC:
- HOWTO22 explains how a VO can request access to DIRAC4EGI and how to interact with it via the CLI (a minimal example is sketched below)
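As an illustration only (HOWTO22 remains the reference; the job description below is a hypothetical minimal example and the VO/group name is an assumption), a typical DIRAC CLI session looks like this:
$ dirac-proxy-init -g <your_vo>_user    # create a proxy for your VO group
$ cat simple.jdl
Executable = "/bin/hostname";
StdOutput = "std.out";
StdError = "std.err";
OutputSandbox = {"std.out", "std.err"};
$ dirac-wms-job-submit simple.jdl       # prints the assigned job ID
$ dirac-wms-job-status <job_id>         # follow the job through its states
$ dirac-wms-job-get-output <job_id>     # retrieve the output sandbox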
WMS decommissioning status
42 WMS servers still in production and monitored
Last update | NGI/ROC    | STATUS | Comments
2018-02-12  | NGI_FRANCE | DONE   | All WMS decommissioned
IPv6 readiness plans
- assessment ongoing https://wiki.egi.eu/w/index.php?title=IPV6_Assessment (basic connectivity checks that sites can run are sketched after this list)
- some NGIs/ROCs are still missing from the assessment
- added column in FedCloud wiki to monitor IPv6 readiness of cloud sites https://wiki.egi.eu/wiki/Federated_Cloud_infrastructure_status#Status_of_the_Federated_Cloud
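A few basic checks that sites can run to gauge the IPv6 readiness of a service endpoint (the hostname below is a placeholder, replace it with one of your own services):
$ getent ahostsv6 ce01.example.org       # does the host publish an AAAA record?
$ ping6 -c 3 ce01.example.org            # basic IPv6 reachability
$ curl -6 -sI https://ce01.example.org/  # can the service be contacted over IPv6?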
webdav probes in OPERATORS profile
The webdav probes were included in the ARGO_MON_OPERATORS profile after approval at the January OMB: in this way failures will generate an alarm on the dashboard, and the ROD teams can open tickets to the failing sites. If no particular issues occur, and if at least 75% of the webdav endpoints pass the tests, the probes will be added to the ARGO_MON_CRITICAL profile, so the results of these probes will be taken into account for the A/R figures.
- webdav endpoints registered in GOC-DB: https://goc.egi.eu/gocdbpi/public/?method=get_service&&service_type=webdav (see the query example after this list)
- link to nagios results: https://argo-mon.egi.eu/nagios/cgi-bin/status.cgi?servicegroup=SERVICE_webdav&style=detail
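For instance, the registered webdav endpoints can be listed by querying the GOC-DB PI directly; the grep filter is just one possible way to extract the hostnames, assuming the XML output contains HOSTNAME elements:
$ curl -s 'https://goc.egi.eu/gocdbpi/public/?method=get_service&&service_type=webdav' | grep -o '<HOSTNAME>[^<]*</HOSTNAME>'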
List of sites that have not completed the configuration yet:
- NGI_AEGIS: AEGIS01-IPB-SCL https://ggus.eu/index.php?mode=ticket_info&ticket_id=131033 (hardware problems...)
- NGI_HR: egee.srce.hr https://ggus.eu/index.php?mode=ticket_info&ticket_id=131041 (in progress...)
List of sites that disabled webdav: UNIGE-DPNC, GR-01-AUTH, HG-03-AUTH, CETA-GRID, WUT
To register the webdav service endpoint in GOC-DB, follow HOWTO21 in order to fill in the proper information. In particular:
- register a new service endpoint, separated from the SRM one;
- on GOC-DB fill in the webdav URL containing also the VO ops folder, for example: https://darkstorm.cnaf.infn.it:8443/webdav/ops or https://hepgrid11.ph.liv.ac.uk/dpm/ph.liv.ac.uk/home/ops/
- this corresponds to the value of the GLUE2 attribute GLUE2EndpointURL (which contains the port used, but not the VO folder);
- verify that the webdav URL (for example: https://darkstorm.cnaf.infn.it:8443/webdav ) is properly accessible (see the example check below).
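One possible way to run this check, assuming a valid ops VO proxy in $X509_USER_PROXY and the davix client shipped with UMD (the endpoint is the example URL mentioned above):
$ davix-ls -E "$X509_USER_PROXY" --capath /etc/grid-security/certificates https://darkstorm.cnaf.infn.it:8443/webdav/ops
or, with curl, checking that the endpoint answers with an HTTP success code:
$ curl --cert "$X509_USER_PROXY" --capath /etc/grid-security/certificates -s -o /dev/null -w '%{http_code}\n' https://darkstorm.cnaf.infn.it:8443/webdav/ops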
Storage accounting deployment
During the September meeting, the OMB approved the full-scale deployment of storage accounting. The APEL team has tested it with a group of early adopter sites, and the results show that storage accounting is now production-ready.
Storage accounting is currently supported only for the DPM and dCache storage elements; therefore, only the resource centres deploying these kinds of storage elements are requested to publish storage accounting data.
In order to properly install and configure the storage accounting scripts, please follow the instructions reported in the wiki: https://wiki.egi.eu/wiki/APEL/Storage
IMPORTANT: make sure you have installed version 1.0.4 of the star-accounting.py script (http://svnweb.cern.ch/world/wsvn/lcgdm/lcg-dm/trunk/scripts/StAR-accounting/star-accounting.py)
After setting up a daily cron job and running the accounting software, look for your data in the Accounting Portal: http://goc-accounting.grid-support.ac.uk/storagetest/storagesitesystems.html. If it does not appear within 24 hours, or there are other errors, please open a GGUS ticket to APEL, who will help debug the process. A sketch of a possible daily wrapper is shown below.
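A possible shape for that daily job, shown only as a sketch: the script location, its options and the outgoing spool layout are assumptions, so take the exact values from the APEL/Storage wiki:
# hypothetical wrapper script, run daily from cron
STAR_OPTIONS="..."                                    # site-specific options from the APEL/Storage wiki
OUT=/var/spool/apel/outgoing/$(date +%Y%m%d%H%M%S)    # assumed SSM outgoing spool
mkdir -p "$OUT"
/usr/share/lcgdm/scripts/star-accounting.py $STAR_OPTIONS > "$OUT/$(uuidgen)"   # assumed script path
ssmsend                                               # hand the record over to APEL SSM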
PROBLEM: several (DPM) sites are using an old version of the star-accounting.py script. This leads to records having an EndTime 30 days in the future. The star-accounting.py script version to use is v1.0.4 (http://svnweb.cern.ch/world/wsvn/lcgdm/lcg-dm/trunk/scripts/StAR-accounting/star-accounting.py).
The APEL team opened tickets for this issue:
- AEGIS02-RCUB: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131892 (SOLVED)
- AEGIS03-ELEF-LEDA: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131893 (SOLVED)
- AUVERGRID: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131894 (SOLVED)
- CAMK: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131895 (SOLVED)
- CETA-GRID: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131896 (SOLVED)
- GARR-01-DIR: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131897 (SOLVED)
- IN2P3-LPC: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131917 (SOLVED)
- RO-02-NIPNE: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131918 (SOLVED)
- RO-07-NIPNE: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131920 (SOLVED)
- TOKYO-LCG2: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131921 (SOLVED)
- TW-NTU-HEP: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131923 (SOLVED)
- UA-ISMA: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131925
- UKI-NORTHGRID-SHEF-HEP: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131926 (SOLVED)
- TASK: https://ggus.eu/index.php?mode=ticket_info&ticket_id=131928 (SOLVED)
PROBLEM number 2: the APEL repository is receiving an increasing number of storage records that have been encrypted with something other than the APEL certificate, so the records cannot be read (and the sender is unknown). If your site isn't successfully publishing, please comment out the "server_cert" variable in sender.cfg.
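For example, assuming the default APEL SSM configuration path, the variable can be commented out with:
$ sed -i 's/^\(server_cert\)/# \1/' /etc/apel/sender.cfg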
The list of sites already publishing and of the tickets opened is reported here. Several sites are not publishing storage accounting data yet; NGIs, please follow up with your sites on the configuration of the script in order to speed up the process.
AOB
- NGI_FRANCE is reorganising its national operations following a distributed model (no longer operated by CC-IN2P3); are other NGIs reorganising as well? How? Do you have suggestions?
- do you have suggestions to improve the EGI Operations meeting itself or to improve the distribution of the items discussed/reported between EGI Operations and OMB?
Next meeting
- Mar 12th, 2018 https://indico.egi.eu/indico/event/3639/