

UMD 1.1.0 : Additional information for provided products

While undergoing the EGI Software Provisioning process, some installation issues and known problems were detected and documented. This document gathers that information in a single place, where site admins can learn about the provided software before installing it or, with later updates to UMD-1, before upgrading their systems.

Each section that follows is linked from the relevant sections of the EGI Software Repository for this release of UMD.

emi.proxyrenewal.sl5.x86_64

  • Additional Details: No issues found during EGI's Software Provisioning.
  • Installation Notes: No issues found during EGI's Software Provisioning.
  • Known Issues: No issues found during EGI's Software Provisioning.


emi.mpi.sl5.x86_64

  • Additional Details: No issues found during EGI's Software Provisioning.
  • Installation Notes: No issues found during EGI's Software Provisioning.
  • Known Issues: It has been observed that CREAM/Torque + MAUI is not able to execute parallel jobs when more than one processor is requested. A bug that had already been fixed seems to have reappeared. More information can be found in the GGUS ticket: https://ggus.eu/ws/ticket_info.php?ticket=57828

This is a Torque/MAUI problem that affects MPI jobs, but it is not exclusive to that kind of job; an illustrative JDL fragment is given after the list below.

    • Fine grained process mapping is not supported with Slurm or Condor schedulers.
    • Shared filesystem detection only works in Linux.
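For reference, the kind of request that triggers the Torque/MAUI problem above is simply a job whose JDL asks for more than one processor. A minimal, purely illustrative JDL fragment is sketched below; the attribute spelling follows common MPI submission examples and the executable name is a placeholder:

 # Illustrative only: any job requesting more than one CPU on an
 # affected CREAM/Torque + MAUI site may hit the scheduling bug above.
 JobType    = "Normal";
 CPUNumber  = 4;
 Executable = "mpi-job-wrapper.sh";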



emi.unicore-uvos.sl5.x86_64

  • Additional Details: No issues found during EGI's Software Provisioning.
  • Installation Notes: No issues found during EGI's Software Provisioning.
  • Known Issues: No issues found during EGI's Software Provisioning.


emi.storm.sl5.x86_64

  • Additional Details: No issues found during EGI's Software Provisioning.
  • Installation Notes: Minor bugs were found during the verification process.
  • Known Issues: No issues found during EGI's Software Provisioning.


emi.lb.sl5.x86_64

  • Additional Details: The product was tested as a standalone server, and basic job registration was shown to work correctly with an EMI WMS and a gLite 3.1 WMS.
    • Therefore:
      • the L&B should not act as a proxy (do not define the GLITE_LB_TYPE variable),
      • and the GLITE_LB_SUPER_USERS variable should be defined to allow access from the WMS node; see the sketch below.
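As a rough sketch only (the DN is a placeholder to be replaced with the actual WMS host DN), the corresponding site-info.def settings would look something like:

 # Standalone L&B: leave GLITE_LB_TYPE undefined (do not set it to "proxy")
 # GLITE_LB_TYPE=proxy
 # Grant the WMS host access to this L&B (placeholder DN, adapt to your site)
 GLITE_LB_SUPER_USERS="/DC=org/DC=example/OU=hosts/CN=wms.example.org"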
  • Installation Notes:
  • Known Issues:
    • bdii is started when configured with yaim, but it is not included in the list of gLite services handled by the init scripts, so it is not started again after a reboot. GGUS ticket: https://ggus.eu/tech/ticket_show.php?ticket=71448
      • If needed, use /sbin/service bdii restart to start the bdii service.
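A possible way to handle this on the node, assuming the standard bdii init script is installed, is sketched below:

 # start bdii now if it is not running
 /sbin/service bdii restart
 # optionally register bdii to start at boot, since the gLite wrapper does not do it
 /sbin/chkconfig bdii on
 /sbin/chkconfig --list bdii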
    • Due to faulty DNS over IPv6 resolution in the c-ares library (v. 1.6.0), L&B does not work in most scenarios involving IPv6-only machines. Upgrade c-ares to 1.7.3 and relaunch yaim to get full IPv6 functionality.
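A sketch of that procedure on an SL5 node is given below; the package is assumed to be named c-ares in the configured repositories, and the yaim node type for a standalone L&B is assumed to be glite-LB, so adapt both to your installation:

 yum update c-ares
 rpm -q c-ares        # expect version >= 1.7.3
 /opt/glite/yaim/bin/yaim -c -s site-info.def -n glite-LB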
    • Background purge works and its return code is handled correctly. However, it also results in a clean termination of a slave process, which the master currently reports to syslog with priority 'Error' despite being harmless (the actual wording is 'Slave \d exited with return code 1'). This is not a problem, just a false alarm for anyone reading the logs.
    • When the L&B service is installed on a separate machine from the WMS service and configured to work with a gLite 3.1 WMS, jobs cannot be purged from the L&B service. On the UI machine the following warning message can be observed after output retrieval:
Warning - JobPurging not allowed
 (The Operation is not allowed: Unable to complete job purge)
  • Workaround:
    • This problem is solved by granting ADMIN_ACCESS to the WMS service on the L&B machine, i.e. by adding the WMS DN to the action "ADMIN_ACCESS" section of the /etc/glite-lb/glite-lb-authz.conf file.
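As an illustration only, the added entry would look roughly like the stanza below; the DN is a placeholder and the exact stanza syntax should be checked against the L&B administrator's guide:

 action "ADMIN_ACCESS" {
     rule permit {
         subject = "/DC=org/DC=example/OU=hosts/CN=wms.example.org"
     }
 }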

emi.wms.sl5.x86_64

  • Additional Details: This release solves the issue concerning the internal proxy structure described in https://savannah.cern.ch/bugs/index.php?84155.
  • Installation Notes: No issues found during EGI's Software Provisioning.
  • Known Issues:
    • User documentation: some links are not working and the API documentation is not available. GGUS ticket: https://ggus.eu/tech/ticket_show.php?ticket=71065
    • There is a Savannah bug (https://savannah.cern.ch/bugs/?82983) that will be fixed in the next UMD update.
    • Yaim doesn't write the complete FQAN for roles into the file /etc/glite-wms/glite_wms_wmproxy.gacl, so after running yaim you have to edit that file by hand and add the missing parts. For example, if groups.conf contains the line "/dteam/ROLE=lcgadmin":::sgm:, the entry to add to the .gacl file is:
<entry>
  <voms>
    <fqan>/dteam/Role=lcgadmin/Capability=NULL</fqan>
  </voms>
  <allow>
    <exec/>
  </allow>
</entry>