
EGI-InSPIRE:Switzerland-QR7

Quarterly Report Number | NGI Name | Partner Name | Author


1. MEETINGS AND DISSEMINATION

1.1. CONFERENCES/WORKSHOPS ORGANISED

Date | Location | Title | Participants | Outcome (Short report & Indico URL)

1.2. OTHER CONFERENCES/WORKSHOPS ATTENDED

Date | Location | Title | Participants | Outcome (Short report & Indico URL)
10.11.2011 | Amsterdam | NIL kickoff meeting | 1 from Switzerland | https://www.egi.eu/indico/materialDisplay.py?materialId=minutes&confId=659
28.11.2012 | Bern | Swiss Distributed Computing Day | About 100 | http://www.swing-grid.ch/event/516553-swiss-distributed-computing-day
24.1.2012 | Amsterdam | OMB | Two from Switzerland | https://www.egi.eu/indico/conferenceDisplay.py?confId=618


1.3. PUBLICATIONS

Publication title | Journal / Proceedings title | Journal references (Volume number, Issue, Pages from - to) | Authors (1., 2., 3., et al.?)

2. ACTIVITY REPORT

2.1. Progress Summary

CSCS

  1. Preparing the move to the new Lugano building, which is almost ready. New hardware has been ordered, and the move is expected by mid-April.

PSI

  1. Received new storage hardware (an SGI SI5500 system with 2×60 3 TB disks, ca. 270 TB of usable RAID6 space); installation is in progress.
  2. Further virtualized services (directory services and central logging).
  3. Expecting delivery of additional compute nodes.
  4. The new storage and compute nodes require a major restructuring of our network environment; this will be carried out in March.
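The quoted usable capacity can be sanity-checked with a short calculation. Note that the RAID6 group size below is an assumption not stated in the report; the figure of ca. 270 TB is consistent with 8-disk groups (6 data + 2 parity):

```python
# Hypothetical sanity check of the quoted usable capacity (ca. 270 TB).
# Assumption (not in the report): the 120 disks are organized in
# RAID6 groups of 8 disks, each with 2 parity disks.

DISKS = 2 * 60       # two enclosures of 60 disks each
DISK_TB = 3          # 3 TB per disk
GROUP_SIZE = 8       # assumed disks per RAID6 group
PARITY = 2           # RAID6 dedicates two disks' worth of parity per group

raw_tb = DISKS * DISK_TB                            # 360 TB raw
groups = DISKS // GROUP_SIZE                        # 15 groups
usable_tb = groups * (GROUP_SIZE - PARITY) * DISK_TB  # 270 TB usable

print(raw_tb, usable_tb)  # → 360 270
```

Other group sizes would give a different usable/raw ratio, so this is only one plausible layout matching the reported number.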

2.2. Main Achievements

CSCS

  1. Moved most production systems to KVM on SSD disks. This has notably increased the performance and reliability of the Grid-related VMs.
  2. Installed the new Interlagos CPUs in 10 worker nodes (WNs), increasing the number of job slots per machine to 32.
  3. Deployed GPFS on the previous Lustre hardware. This has notably increased the performance and stability of the scratch filesystem; GPFS appears considerably more performant than Lustre for our current setup, and metadata is now kept exclusively on SSDs.
  4. Re-cabled part of the cluster switches and networks to prepare for the move.

2.3. Issues and mitigation

Issue Description | Mitigation Description
CSCS: ARC-CEs seem unable to work properly with the gLite BDII | Waiting for a new middleware update.