
Agenda-2018-03-12
***TW-NCHC https://ggus.eu/index.php?mode=ticket_info&ticket_id=133282 (DPM problems solved, statistics are improving)
**NGI_AEGIS: https://ggus.eu/index.php?mode=ticket_info&ticket_id=132809 (AEGIS01-IPB-SCL)
**NGI_IL: https://ggus.eu/index.php?mode=ticket_info&ticket_id=133284 (IL-TAU-HEP, TECHNION-HEP) intermittent failures; asked ARGO for help; misconfiguration of the lcg-voms2.cern.ch vomses and lsc files.
**NGI_IT: https://ggus.eu/index.php?mode=ticket_info&ticket_id=133285 (IGI-BOLOGNA, INFN-BOLOGNA-T3, INFN-CNAF-LHCB, INFN-T1) the sites are slowly coming back online
**NGI_PL: https://ggus.eu/index.php?mode=ticket_info&ticket_id=132815 PSNC (intermittent failures with QCG and SRM)


*Underperformed sites after 3 consecutive months, underperformed NGIs, QoS violations:
**NGI_IL: https://ggus.eu/index.php?mode=ticket_info&ticket_id=133284 (IL-TAU-HEP, TECHNION-HEP) intermittent failures; asked ARGO for help; misconfiguration of the lcg-voms2.cern.ch vomses and lsc files.
**NGI_IT: https://ggus.eu/index.php?mode=ticket_info&ticket_id=133285 (IGI-BOLOGNA, INFN-BOLOGNA-T3, INFN-CNAF-LHCB, INFN-T1) the sites are slowly coming back online


suspended sites:

Revision as of 17:54, 2 March 2018



AGENDA CONTENT UNDER CONSTRUCTION

General information

Middleware

UMD/CMD

For any kind of operational issue, please contact operations at egi.eu or open a ticket to the Operations SU in GGUS. For issues with the distribution, please open a ticket to the Software Provisioning SU in GGUS.

  • CMD-OS update still in preparation: an SR for cloudkeeper was found, but packages for Ubuntu could not be obtained; there is still no way to include the user id isolation patch for Mitaka/Ubuntu

Preview repository

Operations

GGUS Support Unit review

  • EGCF 2015 -> contacted email
  • EMI WN 2015 -> to be changed together with UI
  • EMIR 2013 -> contacted email
  • OpenNebula 2015 -> to be reviewed in the context of the redefinition of the fedcloud-related SUs
  • rOCCI 2015 -> to be reviewed in the context of the redefinition of the fedcloud-related SUs
  • UNICORE-Client 2013 -> can be closed

ARGO/SAM


FedCloud

  • hardening FedCloud Appliances in App-DB (in progress)
  • cloudkeeper for OpenStack (the vmcatcher replacement) cannot yet be distributed for Mitaka/Ubuntu through CMD (missing packages); proposed to the FedCloud TF an installation campaign to bypass the UMD process (this time only)

Feedback from Helpdesk

Monthly Availability/Reliability

  • Underperformed sites after 3 consecutive months, underperformed NGIs, QoS violations:


suspended sites:

  • PK-CIIT and TW-FTT for security reasons

Compute and storage resources to be published in the BDII - GLUE2

It is important that sites publish their resources in the BDII (GLUE2 schema) so that we can keep track of the capacity of our infrastructure and how it evolves over time. NGIs are asked to follow up with their sites to check whether the information is properly published:

  • several SRM servers (dCache and DPM) and ARC-CEs are missing from the GLUE2 schema
  • some benchmark values are clearly wrong

In particular we need to know:

  • number of cores and benchmark (Manual for Hepspec06 benchmark)
  • amount of storage (disk and tape) available

On VAPOR you can easily check the capacity of your NGI and which resources are missing:


  • Example of an LDAP query to check whether a site is publishing the HEP-SPEC06 benchmark:
$ ldapsearch -x -LLL -H ldap://egee-bdii.cnaf.infn.it:2170 -b "GLUE2DomainID=pic,GLUE2GroupID=grid,o=glue" '(&(objectClass=GLUE2Benchmark)(GLUE2BenchmarkType=hep-spec06))'

dn: GLUE2BenchmarkID=ce07.pic.es_hep-spec06,GLUE2ResourceID=ce07.pic.es,GLUE2ServiceID=ce07.pic.es_ComputingElement,GLUE2GroupID=resource,GLUE2DomainID=pic,GLUE2GroupID=grid,o=glue
GLUE2BenchmarkExecutionEnvironmentForeignKey: ce07.pic.es
GLUE2BenchmarkID: ce07.pic.es_hep-spec06
GLUE2BenchmarkType: hep-spec06
objectClass: GLUE2Entity
objectClass: GLUE2Benchmark
GLUE2BenchmarkValue: 12.1205
GLUE2EntityOtherInfo: InfoProviderName=glite-ce-glue2-benchmark-static
GLUE2EntityOtherInfo: InfoProviderVersion=1.1
GLUE2EntityOtherInfo: InfoProviderHost=ce07.pic.es
GLUE2BenchmarkComputingManagerForeignKey: ce07.pic.es_ComputingElement_Manager
GLUE2EntityName: Benchmark hep-spec06
GLUE2EntityCreationTime: 2017-06-20T16:50:48Z

dn: GLUE2BenchmarkID=ce01.pic.es_hep-spec06,GLUE2ResourceID=ce01.pic.es,GLUE2ServiceID=ce01.pic.es_ComputingElement,GLUE2GroupID=resource,GLUE2DomainID=pic,GLUE2GroupID=grid,o=glue
GLUE2BenchmarkExecutionEnvironmentForeignKey: ce01.pic.es
GLUE2BenchmarkID: ce01.pic.es_hep-spec06
GLUE2BenchmarkType: hep-spec06
objectClass: GLUE2Entity
objectClass: GLUE2Benchmark
GLUE2BenchmarkValue: 13.4856
GLUE2EntityOtherInfo: InfoProviderName=glite-ce-glue2-benchmark-static
GLUE2EntityOtherInfo: InfoProviderVersion=1.1
GLUE2EntityOtherInfo: InfoProviderHost=ce01.pic.es
GLUE2BenchmarkComputingManagerForeignKey: ce01.pic.es_ComputingElement_Manager
GLUE2EntityName: Benchmark hep-spec06
GLUE2EntityCreationTime: 2017-09-05T07:34:26Z
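Since some published benchmark values are clearly wrong, the LDIF returned by the query above can be sanity-checked automatically. The sketch below flags per-core HEP-SPEC06 values outside a plausible range; the 5–60 bounds are an illustrative assumption, not an official threshold, and the second sample entry is made up:

```shell
# Sanity-check hep-spec06 values in LDIF output from the query above.
check_benchmarks() {
  awk '/^GLUE2BenchmarkID:/    { id = $2 }
       /^GLUE2BenchmarkValue:/ {
         # 5-60 is an assumed plausible per-core range; adjust to your hardware.
         if ($2 < 5 || $2 > 60) printf "SUSPECT %s = %s\n", id, $2
         else                   printf "OK %s = %s\n", id, $2
       }'
}

# Example on a fragment of the LDIF above (the second entry is invented):
check_benchmarks <<'EOF'
GLUE2BenchmarkID: ce07.pic.es_hep-spec06
GLUE2BenchmarkValue: 12.1205
GLUE2BenchmarkID: broken.example.org_hep-spec06
GLUE2BenchmarkValue: 250
EOF
# Prints:
# OK ce07.pic.es_hep-spec06 = 12.1205
# SUSPECT broken.example.org_hep-spec06 = 250
```

To scan a whole NGI, pipe the ldapsearch command shown above into check_benchmarks.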
  • Example of an LDAP query to get the number of LogicalCPUs published by an ARC-CE (due to a bug in the info-provider, CREAM-CEs publish the total number under the ExecutionEnvironment class):
$ ldapsearch -x -LLL -H ldap://egee-bdii.cnaf.infn.it:2170 -b "GLUE2DomainID=UA_ILTPE_ARC,GLUE2GroupID=grid,o=glue" 'objectClass=GLUE2ComputingManager' GLUE2ComputingManagerTotalLogicalCPUs

dn: GLUE2ManagerID=urn:ogf:ComputingManager:ds4.ilt.kharkov.ua:pbs,GLUE2ServiceID=urn:ogf:ComputingService:ds4.ilt.kharkov.ua:arex,GLUE2GroupID=services,GLUE2DomainID=UA_ILTPE_ARC,GLUE2GroupID=grid,o=glue
GLUE2ComputingManagerTotalLogicalCPUs: 168
  • Example of an LDAP query to get the number of LogicalCPUs published by a CREAM-CE:
$ ldapsearch -x -LLL -H ldap://egee-bdii.cnaf.infn.it:2170 -b "GLUE2DomainID=UKI-SOUTHGRID-SUSX,GLUE2GroupID=grid,o=glue" 'objectClass=GLUE2ExecutionEnvironment' GLUE2ExecutionEnvironmentLogicalCPUs GLUE2ExecutionEnvironmentPhysicalCPUs GLUE2ExecutionEnvironmentTotalInstances

dn: GLUE2ResourceID=grid-cream-02.hpc.susx.ac.uk,GLUE2ServiceID=grid-cream-02.hpc.susx.ac.uk_ComputingElement,GLUE2GroupID=resource,GLUE2DomainID=UKI-SOUTHGRID-SUSX,GLUE2GroupID=grid,o=glue
GLUE2ExecutionEnvironmentTotalInstances: 71
GLUE2ExecutionEnvironmentLogicalCPUs: 568
GLUE2ExecutionEnvironmentPhysicalCPUs: 71
  • Example of an LDAP query to get the amount of storage published:
$ ldapsearch -x -LLL -H ldap://egee-bdii.cnaf.infn.it:2170 -b "GLUE2DomainID=UKI-LT2-Brunel,GLUE2GroupID=grid,o=glue" 'objectClass=GLUE2StorageServiceCapacity'
dn: GLUE2StorageServiceCapacityID=dgc-grid-38.brunel.ac.uk/capacity,GLUE2ServiceID=dgc-grid-38.brunel.ac.uk,GLUE2GroupID=resource,GLUE2DomainID=UKI-LT2-Brunel,GLUE2GroupID=grid,o=glue
GLUE2StorageServiceCapacityUsedSize: 18808
objectClass: GLUE2StorageServiceCapacity
GLUE2StorageServiceCapacityFreeSize: 2020
GLUE2StorageServiceCapacityType: online
GLUE2StorageServiceCapacityStorageServiceForeignKey: dgc-grid-38.brunel.ac.uk
GLUE2StorageServiceCapacityID: dgc-grid-38.brunel.ac.uk/capacity
GLUE2StorageServiceCapacityTotalSize: 21997
GLUE2StorageServiceCapacityReservedSize: 1168
GLUE2EntityCreationTime: 2018-02-09T15:17:16Z

dn: GLUE2StorageServiceCapacityID=dc2-grid-64.brunel.ac.uk/capacity,GLUE2ServiceID=dc2-grid-64.brunel.ac.uk,GLUE2GroupID=resource,GLUE2DomainID=UKI-LT2-Brunel,GLUE2GroupID=grid,o=glue
objectClass: GLUE2StorageServiceCapacity
GLUE2StorageServiceCapacityType: online
GLUE2StorageServiceCapacityStorageServiceForeignKey: dc2-grid-64.brunel.ac.uk
GLUE2StorageServiceCapacityID: dc2-grid-64.brunel.ac.uk/capacity
GLUE2StorageServiceCapacityTotalSize: 1416513
GLUE2StorageServiceCapacityReservedSize: 32985
GLUE2EntityCreationTime: 2018-02-09T15:18:55Z
GLUE2StorageServiceCapacityUsedSize: 1310046
GLUE2StorageServiceCapacityFreeSize: 73482
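Each storage service publishes one GLUE2StorageServiceCapacity object per endpoint, so the site or NGI totals can be obtained by summing the sizes over the LDIF (the capacity attributes are expressed in GB, per the GLUE2 specification). A minimal sketch:

```shell
# Sum the storage capacities (GB) over an LDIF stream on stdin.
sum_capacity() {
  awk '/^GLUE2StorageServiceCapacityTotalSize:/ { total += $2 }
       /^GLUE2StorageServiceCapacityUsedSize:/  { used  += $2 }
       /^GLUE2StorageServiceCapacityFreeSize:/  { free  += $2 }
       END { printf "total=%d used=%d free=%d\n", total, used, free }'
}

# Example on the two Brunel records above:
sum_capacity <<'EOF'
GLUE2StorageServiceCapacityTotalSize: 21997
GLUE2StorageServiceCapacityUsedSize: 18808
GLUE2StorageServiceCapacityFreeSize: 2020
GLUE2StorageServiceCapacityTotalSize: 1416513
GLUE2StorageServiceCapacityUsedSize: 1310046
GLUE2StorageServiceCapacityFreeSize: 73482
EOF
# Prints: total=1438510 used=1328854 free=75502
```

Piping the ldapsearch command above into sum_capacity gives the totals for the whole domain selected by the search base.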

Decommissioning EMI WMS

WMS servers can be decommissioned. Please follow the procedure PROC12. The plan is:

  • Starting from January 2018, put the WMS servers in draining: this blocks the submission of new jobs while allowing previously submitted jobs to finish
    • inform your users in advance that you are going to drain and then dismiss the WMS servers (as per PROC12)
    • several VOs may be enabled on your WMS servers: if only a few of them need the service for a few more weeks, you can disable the other VOs
  • On Dec 14th EGI Operations sent a new broadcast to the VOs reminding users of the forthcoming WMS decommissioning
  • After the end of February, EGI Operations will open a ticket to the sites that have not started the decommissioning process yet

WMS servers in downtime on GOC-DB

VOs have to find alternatives or migrate to DIRAC:

  • the HOWTO22 explains how a VO can request access to DIRAC4EGI and how to interact with it via the CLI

WMS decommissioning status

42 WMS servers are still in production and monitored.

WMS decommissioning status
Last update NGI/ROC STATUS Comments
2018-02-12 NGI_FRANCE DONE All WMS decommissioned


IPv6 readiness plans

webdav probes in OPERATORS profile

The webdav probes were included in the ARGO_MON_OPERATORS profile after approval at the January OMB: in this way failures will generate an alarm on the dashboard, and the ROD teams can open a ticket to the failing sites. If no particular issue occurs, and if at least 75% of the webdav endpoints pass the tests, the probes will be added to the ARGO_MON_CRITICAL profile, so their results will be taken into account for the A/R figures.

List of sites that have not completed the configuration yet:

List of sites that disabled webdav: UNIGE-DPNC, GR-01-AUTH, HG-03-AUTH, CETA-GRID, WUT

For registering the webdav service endpoint on GOC-DB, follow HOWTO21 in order to fill in the proper information. In particular:

Storage accounting deployment

During the September meeting, the OMB approved the full-scale deployment of storage accounting. The APEL team has tested it with a group of early-adopter sites, and the results show that storage accounting is now production-ready.

Storage accounting is currently supported only for the DPM and dCache storage elements; therefore only the resource centres deploying these kinds of storage elements are requested to publish storage accounting data.

In order to properly install and configure the storage accounting scripts, please follow the instructions reported in the wiki: https://wiki.egi.eu/wiki/APEL/Storage

IMPORTANT: make sure you have installed the star-accounting.py script v1.0.4 (http://svnweb.cern.ch/world/wsvn/lcgdm/lcg-dm/trunk/scripts/StAR-accounting/star-accounting.py)

After setting up a daily cron job and running the accounting software, look for your data in the Accounting Portal: http://goc-accounting.grid-support.ac.uk/storagetest/storagesitesystems.html. If it does not appear within 24 hours, or you see other errors, please open a GGUS ticket to APEL, who will help debug the process.
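As a rough sketch of the daily publishing step, a wrapper like the one below can be run from cron. All paths are assumptions, and the star-accounting.py options are deliberately left as a placeholder because they depend on your storage element; take both from the APEL/Storage wiki page rather than from this sketch:

```shell
#!/bin/sh
# Illustrative daily wrapper, e.g. run via a crontab entry such as:
#   30 4 * * * root /usr/local/sbin/run-star-accounting.sh
# All paths below are assumptions; adapt them to your installation.
set -e

OUTDIR=/var/spool/apel/outgoing          # assumed SSM outgoing spool
RECORD="$OUTDIR/$(date +%Y%m%d%H%M%S)"

# Dump the storage records (options omitted on purpose: they are site- and
# storage-element-specific; see https://wiki.egi.eu/wiki/APEL/Storage).
/root/star-accounting.py > "$RECORD"

# Hand the records to the APEL SSM sender.
ssmsend
```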

PROBLEM: several (DPM) sites are using an old version of the star-accounting.py script. This leads to records having an EndTime 30 days in the future. The star-accounting.py script version to use is v1.0.4 (http://svnweb.cern.ch/world/wsvn/lcgdm/lcg-dm/trunk/scripts/StAR-accounting/star-accounting.py).

The APEL team opened tickets for this issue:

PROBLEM number 2: the APEL repository is receiving an increasing number of storage records encrypted with something other than the APEL certificate, so the records can't be read (and the sender is therefore unknown). If your site isn't successfully publishing, please comment out the "server_cert" variable in sender.cfg
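For reference, the change amounts to something like the fragment below. The section and option names follow the APEL SSM sender configuration; the file paths are illustrative, so check your local copy for the exact layout:

```ini
# sender.cfg (fragment; paths are illustrative)
[certificates]
certificate: /etc/grid-security/hostcert.pem
key: /etc/grid-security/hostkey.pem
# Commented out: with a wrong certificate here, records reach the APEL
# repository encrypted with a key APEL cannot decrypt, so the sender is unknown.
# server_cert: /etc/grid-security/apel-server.pem
```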

The list of sites already publishing and of tickets opened is reported here. Several sites are not publishing storage accounting data yet. NGIs, please follow up with your sites on the configuration of the script in order to speed up the process.

AOB

  • NGI_FRANCE is reorganising its national operations following a distributed model (no longer operated by CC-IN2P3); are other NGIs reorganising as well? How? Do you have suggestions?
  • do you have suggestions to improve the EGI Operations meeting itself, or to improve the distribution of the items discussed/reported between EGI Operations and the OMB?

Next meeting