


Quarterly Report Number: QR3
NGI Name: Ibergrid
Partner Name: LIP & CSIC
Author: Esteban Freire García (CESGA)



Date: 2/12/2010
Location: Universitat Autònoma de Barcelona (UAB), Bellaterra, Barcelona
Title: 4ª Reunión Plenaria de la Red Española de e-Ciencia (4th Plenary Meeting of the Spanish e-Science Network)
Participants: Most of the PIC group, since the meeting was held nearby
Outcome (Short report & Indico URL): This was the first plenary meeting of the Spanish national e-science network (part of NGI-IBERGRID) after the start of EGI. It was held at the UAB and co-organised by IFAE/PIC and UPV. Several topics regarding the Spanish NGI were discussed, from operational issues to VO management and organisation. A member of our group (G. Merino) gave a talk reporting on the experience of the LHC Grid.


Date: 16-17 November 2010
Location: Geneva
Title: Distributed Database Operations Workshop 2010
Participants: 2
Outcome: Regular workshop to discuss the status and progress of the database services, which today are closely linked to core Grid services such as FTS and LFC.

Date: 5-10 December 2010
Location: Geneva
Title: CMS Week
Participants: 1
Outcome: Reported on the Tier-1 status and on the CMS system overall. Two big mid-term actions were discussed for the latter: the migration from ProdAgent to WMSAgent and from DB2 to DB3. The work on the FTS log parser was also discussed; the plan is to build a web interface for it, automate it, and prepare a contribution to the EGI User Forum.

Date: 15/12/10
Location: Córdoba (Spain)
Title: PkIRISGrid RA managers meeting
Participants: 1

Date: 24-26 January 2011
Location: Bari
Title: CMS Storage and Data Access Evolution Workshop
Participants: 1
Outcome: Special workshop to discuss the mid- and long-term plans of the data management systems for CMS on the LHC Grid.


Publication title: e-Referral and e-Science prototype infrastructure for hadron therapy
Journal / Proceedings title: Grid and e-Science: ATLAS data analysis and Medical Physics
Journal references (volume number, pages from - to): 18. 19 – 11 – 2010
Authors (et al?): 1. Faustin Roman, 2. Gabriel Amoros


2.1. Progress Summary

1. The EGEE ROC SWE was decommissioned (GGUS ticket #64997)

2. All sites in NGI_IBERGRID have migrated from R-GMA MonBox to glite-APEL (GGUS ticket #62497)

3. Support for the dteam VO was enforced at all sites

4. The CERN dteam VOMS was replaced by the HellasGrid dteam VOMS at all sites (GGUS tickets #65315 and #66014)

5. Coordinated upgrade of all the WMSs in the region supporting the IBERGRID VOs, before upgrading the IBERGRID VOMS server to gLite 3.2 (due to Savannah bug #72185)

6. Working on a redundancy scheme for the core services running backend databases (LFC and VOMS)

7. Participating in the roll-out activities of the gLite 3.2 CreamCE, gLite 3.2 TopBDII, gLite 3.2 VOMS (LIP) and gLite 3.1 WMS (IFIC)

8. Upgrade of the regional Nagios to the latest production release

9. Operation and upgrades of the regional operational dashboard.

10. Contribution to the deliverable “D4.1 - EGI Operations Architecture”

11. Contribution on a best-effort basis to the Certification Manual documentation

2.2. Main Achievements

1. gLite-APEL migration completed

2. EGEE ROC SWE decommissioning completed

3. Support of dteam VO in all NGI_IBERGRID sites completed

4. The user-support strategy of migrating users from the application VOs to the macro IBERGRID VOs has been completed

2.3. Issues and mitigation

Issue: During December, the r-Nagios service in NGI_IBERGRID was unstable, generating a large fraction of "unknown" results in the December Availability / Reliability report. Several tickets were opened because of these problems (for example, GGUS #65776).
Mitigation: All the issues were solved when the latest r-Nagios production release was installed in January, increasing the overall reliability of the service.

Issue: Additional problems originated from the fact that r-Nagios was not using failover services.
Mitigation: This point was addressed by reconfiguring the r-Nagios server with failover configurations for the multiple services it uses (WMS, TopBDII, VOMS, ...). One issue remains unsolved: r-Nagios still uses a single MyProxy server to store the Nagios user credentials, so if that server is down, the credentials cannot be renewed. The r-Nagios developers have been contacted about this issue.
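The failover reconfiguration described above boils down to trying a list of equivalent service endpoints in order and using the first one that answers. A minimal sketch of that selection logic, assuming hypothetical hostnames (the real IBERGRID endpoints are not reproduced here):

```python
# Sketch of the failover idea applied in the r-Nagios reconfiguration:
# probe a list of equivalent endpoints and use the first one that
# accepts a TCP connection. Hostnames below are placeholders, not the
# real IBERGRID services.
import socket


def first_reachable(endpoints, timeout=2.0):
    """Return the first (host, port) that accepts a TCP connection,
    or None if every endpoint in the list is down."""
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (host, port)
        except OSError:
            continue  # endpoint down or unreachable; try the next one
    return None


# Example failover list for a Top-BDII lookup (2170 is the standard
# BDII LDAP port); the hosts are hypothetical.
topbdii_endpoints = [
    ("topbdii1.example.org", 2170),
    ("topbdii2.example.org", 2170),
]
```

As the report notes, this approach cannot help with the MyProxy credential store, where only a single server is configured and there is no second endpoint to fall back to.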