
EGI-InSPIRE:Switzerland-QR15






Quarterly Report Number | NGI Name | Partner Name | Author
QR 15 | NG-CH | Switzerland | Sigve Haug


1. MEETINGS AND DISSEMINATION

1.1. CONFERENCES/WORKSHOPS ORGANISED

Date | Location | Title | Participants | Outcome (Short report & Indico URL)
30.09-01.10 | CSCS, Lugano (CH) | GridKa Cloud T1-T2 yearly face-to-face | 25 | ATLAS German cloud sites' technical solutions discussed; direct contact between CSCS admins and ATLAS operation experts. https://indico.cern.ch/conferenceDisplay.py?confId=261676

1.2. OTHER CONFERENCES/WORKSHOPS ATTENDED

Date | Location | Title | Participants | Outcome (Short report & Indico URL)
2013-12-13 | Edinburgh | DPM workshop | 25 | -
2013-10-23 | Lausanne | HPC-CH Forum | Michael Rolli, Nico Faerber | -


1.3. PUBLICATIONS

Publication title | Journal / Proceedings title | Journal references (volume, issue, pages) | Authors
Grid Site Testing for ATLAS with HammerCloud | CHEP2013 Proceedings | - | Johannes Elmsheuser (Ludwig-Maximilians-Universitaet Muenchen), Federica Legger (Ludwig-Maximilians-Universitaet Muenchen), Ramon Medrano Llamas (CERN), Gianfranco Sciacca (Universitaet Bern), Daniel Colin van der Ster (CERN)

2. ACTIVITY REPORT

CSCS

  • 2.1. Progress Summary
- Testing removal of the /experiment_software mount on the WNs
  • 2.2. Main Achievements
- Completed migration to SLURM on all CREAM CEs, ARC CEs, and WNs
- Completed migration to dCache 2.6
- Upgraded to PostgreSQL 9.3
- Moved NFS mounts to new NAS managed by CSCS storage team
- Allowed for file deletion over /pnfs mount
  • 2.3. Issues and Mitigation
- Working on publishing accounting to the new APEL server. Currently working on issues relating to CREAM/SLURM accounting as well as JURA.
- An Infiniband switch died and was replaced; no major issues apart from the failed jobs.

PSI

  • 2.1. Progress Summary
- Expanded the SE by 360 TB of raw storage by adding a NetApp E5460,
  the same hardware as our SGI SI5500. Upgraded and homogenized all firmware.
- Decision to continue operating our Solaris X4500 and X4540 machines for the next one or
  two years, based on the availability of replacement parts (decommissioned machines from
  our Tier-2 at CSCS).
- Reinstalled a Solaris network boot and install service (Jumpstart) to ease
  reinstallation of our Solaris 10 machines.
- Completed the phasing out of the disks that had troubled us in the X4540 machines over
  the last two years (frequent failures) by using replacements from the decommissioned
  machines from our Tier-2. Reinstalled all X4540 servers.
  • 2.2. Main Achievements
- SE expansion (360 TB raw)
- Ensured continued use of our existing aging hardware by securing replacement parts and
  providing an adequately resilient Solaris infrastructure for fast reinstallations.
  • 2.3. Issues and Mitigation


UZH: Nothing reported.

UNIBE-ID

1. Progress Summary
Stable operations with minor issues reported below
2. Main Achievements
- UBELIX relocation: At the end of August the whole cluster was relocated to a new server room at the von Roll complex with only minor problems. After 8.5 days of downtime the cluster was fully operational again. Although there were no hardware defects immediately after the relocation, three hard disks and one mainboard failed over the following weeks.

3. Issues and Mitigation
- ARC CE usage record registration to the SMSCG database stopped working due to duplicate job IDs occurring in the SMSCG DB. After deduplication with a small batch script (a minimal sketch is given below), usage record delivery works again, and the accumulated usage records were delivered as well.
- At the new cluster location brand new Brocade switches were installed. Since the relocation we have occasionally been facing very short network link drops on two of those switches. Cases have been opened with Brocade and the problem is being investigated.
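A minimal sketch of such a deduplication, assuming a PostgreSQL-backed accounting database with a hypothetical usage_records table carrying an id primary key and a job_id column (table, column, and connection names are illustrative, not the actual SMSCG schema):

  # Illustrative deduplication of usage records by job id (Python/psycopg2).
  # Table, column and connection details are assumptions, not the real SMSCG schema.
  import psycopg2

  conn = psycopg2.connect("dbname=accounting user=accounting")
  try:
      with conn, conn.cursor() as cur:
          # Keep the row with the lowest primary key for each job_id, delete the other copies.
          cur.execute(
              """
              DELETE FROM usage_records a
              USING usage_records b
              WHERE a.job_id = b.job_id
                AND a.id > b.id
              """
          )
          print("removed %d duplicate records" % cur.rowcount)
  finally:
      conn.close()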


UNIGE-DPNC

  • 2.1. Progress Summary
- Major update of the DPM SE, ongoing migration to SLC6, virtualization of
  central services, maintenance and stable operation.
  • 2.2. Main Achievements
- Upgrade of the DPM SE
- 4 new disk servers (IBM x3630 M4, 43 TB for data) running SLC6 and added to the DPM
- 6 old Solaris disk servers drained from data and retired (2 reused for NFS)
- DPM software upgraded to 1.8.7
- WebDAV and xrootd interfaces added
- Data access via xrootd tested and documented for users (a minimal usage sketch is given at the end of this section)
- Reorganization of data in the SE for the new ATLAS DDM 'Rucio':
   renaming process run by Cedric Serfon for DDM ops using WebDAV;
   two failed attempts in Dec 2013 with 'Too many connections' errors; help from the DPM experts was not on target;
   success in Jan 2014, with local jobs not running
- Preparing a funding request
- Yearly review of accounts
- Change of nearly all IP addresses, making room for growth
- A new web server running SLC6
- Upgrade of Ganglia monitoring to version 3.1.7 (the version in SLC6), compiled from sources on the SLC5 nodes
- Preparing virtual machines for more central services (ARC CE and the batch server)
  • 2.3. Issues and Mitigation
- Federated Data Access using Xrootd (FAX) is not working yet; we lack support.
- Ganglia 3.1.7 does not compile on Solaris 10. No solution yet.
- One hardware problem: an overheating hardware RAID.
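As an illustration of the documented user-level access, a minimal sketch that copies one file from the SE over xrootd by calling the standard xrdcp client from Python (host name and file path are placeholders, not the actual UNIGE-DPNC endpoints):

  # Copy one file from the DPM storage element over xrootd using xrdcp.
  # Host and path are placeholders; replace them with the site's real endpoint.
  import subprocess

  SOURCE = "root://se.example.ch//dpm/example.ch/home/atlas/user/somefile.root"
  DEST = "/tmp/somefile.root"

  # xrdcp <source> <destination>; check=True raises CalledProcessError if the copy fails.
  subprocess.run(["xrdcp", SOURCE, DEST], check=True)
  print("copied to", DEST)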


UNIBE-LHEP

* Progress summary
 - Quite stable operation of the production cluster (ce.lhep). Preparation for the migration from CentOS5 to SLC6 and expansion with nodes obtained from CERN/ATLAS.
 - A new cluster with ~1500 cores (ce01.lhep) has been commissioned and its operation stabilised. Some outstanding issues remain (details below).
* Main achievements
 - ce01.lhep cluster in full production for ATLAS. Commissioning of the Infiniband local area network.
 - ce.lhep cluster operated with reasonable stability until early October, then shut down for expansion and migration from SLC5 to SLC6. Added nodes from CERN/ATLAS; complete, rationalised re-cabling (power and network).
 - ce.lhep decommissioned and re-installed as ce02.lhep (SLC6.4, ROCKS 6.1 Front-end), added to GOCDB. ATLAS SLC6 WN image prepared. ROCKS images for Lustre nodes (MDS, OSS) prepared, with Lustre 2.1.6. Ready for mass-install.
 - Enabled t2k.org VO on our Storage Element
* Issues and mitigations
 Recurring problems on the ce01.lhep cluster:
 - Frequent NIC lock-ups on the Lustre OSS nodes, causing Lustre to hang and consequent cluster downtimes. Solution: switch the Lustre LAN from TCP to Infiniband.
 - The CVMFS cache/partition-full issue causes WNs to become black holes. Mitigation: manual cache clean-up executed from time to time (a minimal sketch is given at the end of this section). Foreseen solution: re-install with CVMFS 2.1.5, which is said to resolve the bug.
 - New NFS v4 defaults prevented the ATLAS software manager jobs from validating the CVMFS software deployment. Mitigation: run the validation manually from time to time on ATLAS request. Solution: the setting causing the issue was identified and corrected.
 - One-off failure of one PDU: the LAN switch for the ce01.lhep cluster was affected (no redundant PSU); the cluster hung until power was recovered.
 Problems affecting ce.lhep production cluster:
 - Cooling instabilities have caused a full cluster shutdown at least once. In a second instance only part of the WNs shut down spontaneously. Recovery implies re-installation.
 - Some obscure issues with ROCKS and a Lustre build against the latest available kernel have delayed the mass-install of the cluster.
 - A critical ARC bug causes the services to stop processing jobs upon a data staging failure. Mitigation: none; we rely on Nagios email notifications from the PPS EGI Nagios service to catch failures and react by restarting the services.
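A minimal sketch of the kind of manual clean-up used as a mitigation, assuming worker nodes with the standard cvmfs_config utility; the cache location and the 90% usage threshold are illustrative assumptions, not the site's actual configuration:

  # Wipe the CVMFS cache on a worker node when its partition is nearly full.
  # Cache path and threshold are illustrative assumptions.
  import shutil
  import subprocess

  CACHE_PATH = "/var/lib/cvmfs"   # default CVMFS cache location
  THRESHOLD = 0.90                # act when more than 90% of the partition is used

  usage = shutil.disk_usage(CACHE_PATH)
  used_fraction = usage.used / usage.total

  if used_fraction > THRESHOLD:
      # cvmfs_config wipecache unmounts the repositories and clears the local cache.
      subprocess.run(["cvmfs_config", "wipecache"], check=True)
      print("cache wiped (usage was %.0f%%)" % (100 * used_fraction))
  else:
      print("cache usage at %.0f%%, nothing to do" % (100 * used_fraction))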
 - Some obscure issues with ROCKS and a Lustre build against the latest kernel available have delayed mass-install of the cluster. Issue: a critical ARC bug causes the services to stop processing jobs upon a Data Staging failure. Mitigation: none. Rely on Nagios email notification from the PPS EGI Nagios service to catch failures and react by restarting the services