
Preview 2.27.0



ARC 6.5.0 and 6.6.0

The Advanced Resource Connector (ARC) middleware is an Open Source software solution to enable distributed computing infrastructures, with an emphasis on processing large volumes of data. ARC provides an abstraction layer over computational resources, complete with input and output data movement functionality. The security model of ARC is identical to that of other Grid solutions, relying on delegation of user credentials and the concept of Virtual Organisations. ARC also provides client tools, as well as APIs in C++, Python and Java.
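As a hedged illustration of the client tools mentioned above (ce.example.org is a placeholder computing element, and the -C endpoint selection option follows the ARC 6 client documentation; check it before relying on the exact flags):

    # create a proxy from the user credentials
    arcproxy
    # write a minimal xRSL job description
    cat > job.xrsl <<'EOF'
    &(executable="/bin/echo")(arguments="hello")(stdout="out.txt")
    EOF
    # submit to a computing element and check the job state
    arcsub -C ce.example.org job.xrsl
    arcstat --all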

Release notes: 6.5.0

This release comes with quite a large set of enhancements and improvements.

Bug 3841 (jobs piling up in the FINISHING state), which one known site suffers from, is proving resilient. We believe that a busy file system hosting the jobs' session directories had a strong negative impact on data staging. This needs more investigation and future development to optimize the load on session directories.

However, bug 3890, related to xrootd transfers, whose fix had to be reverted in release 6.4.1, is now solved.

Among the enhancements and new features, we would like to highlight:

  • Arcctl:
    • arcctl can now be installed standalone, without A-REX. It comes with the test CA and third-party deployment (CA certificates, VOMS) modules on board; see the sketch after this list.
    • As before, installing A-REX gives you the full arcctl.
  • Client area enhancements:
    • installation automated with the standalone arcctl
    • new, consistent and streamlined submission endpoint selection options
    • a consistent, target-area-oriented split of the Globus plugins
    • bash completion for the client tools
    • documentation for the clients!
  • ARCHERY management:
    • new JSON configuration for flexible services topology definition
  • Technology preview:
    • This release also includes a technology preview of a new functionality: the "Community-defined RTEs" ecosystem, which enables automated software environment provisioning on ARC CEs by community software managers. The ecosystem consists of two layers: an ARCHERY-based software (RTE) registry and a new set of ARC control tool modules. More details can be found in the documentation.
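A minimal sketch of the arcctl operations referred to above, assuming the subcommands documented for ARC 6 (the VO and RTE names are only examples):

    arcctl deploy igtf-ca classic          # deploy the IGTF CA certificates
    arcctl deploy voms-lsc atlas --egi-vo  # deploy VOMS LSC files for an example VO
    arcctl test-ca init                    # initialise the on-board test CA
    arcctl test-ca usercert                # issue a test user certificate
    arcctl rte list                        # list the RTEs known to the CE
    arcctl rte enable ENV/PROXY            # enable one of the built-in RTEs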

Release notes: 6.6.0

In this release we offer the very first technology preview of OIDC token handling in ARC. With it comes a new configuration option that enables the feature. Please see the documentation for how to use tokens with ARC.
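A hedged sketch of enabling the preview, assuming the [authtokens] block name given in the ARC 6.6 documentation (verify against the documentation before use):

    # enable token processing in arc.conf
    cat >> /etc/arc.conf <<'EOF'
    [authtokens]
    EOF
    # restart A-REX so the new block takes effect
    systemctl restart arc-arex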

Future Support of the ARC 5 Series

Now that ARC 6 is released, we will only provide security updates for ARC 5.

In particular:

  • No new feature development is planned or ongoing for ARC 5, and no bug-fixing development will happen on the ARC 5 code base in the future, except for security issues.
  • Security fixes for ARC 5 will be provided until the end of June 2020.
  • Production sites already running ARC 5 will be able to get deployment and configuration troubleshooting help via GGUS until the end of June 2021. We call this "operational site support".
  • ARC 5 is available in EPEL7 and will stay there. EPEL8 will only contain ARC 6.

CVMFS 2.7.2

The CernVM File System provides a scalable, reliable and low-maintenance software distribution service. It was developed to assist High Energy Physics (HEP) collaborations to deploy software on the worldwide-distributed computing infrastructure used to run data processing applications. CernVM-FS is implemented as a POSIX read-only file system in user space (a FUSE module). Files and directories are hosted on standard web servers and mounted in the universal namespace /cvmfs.
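Because repositories appear as a plain POSIX read-only filesystem, ordinary tools work on them once a repository is mounted; a small sketch (atlas.cern.ch is just an example repository, typically mounted on demand via autofs):

    cvmfs_config probe atlas.cern.ch   # verify the repository can be mounted
    ls /cvmfs/atlas.cern.ch            # browse it like any directory tree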

Release notes

CernVM-FS 2.7.2 is a bugfix release. Please find detailed release notes in the technical documentation.

dCache 5.2.20

dCache provides a system for storing and retrieving huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual filesystem tree with a variety of standard access methods.
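As a hedged example of one such standard access method, listing and fetching data through a hypothetical WebDAV door with the davix tools (dcache.example.org and the paths are placeholders; 2880 is the conventional dCache WebDAV port):

    davix-ls davs://dcache.example.org:2880/data
    davix-get davs://dcache.example.org:2880/data/file.root ./file.root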

Detailed release notes on the official product site: https://www.dcache.org/downloads/1.9/release-notes-5.2.shtml

frontier-squid 4.11.2

The frontier-squid software package is a patched version of the standard Squid HTTP proxy cache software, pre-configured for use with the Frontier distributed database caching system. This installation is recommended for use with Frontier by the LHC CMS and ATLAS projects, and it also works well with the CernVM File System. Many people use it for other applications as well.
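A hedged sketch of pointing clients at a frontier-squid instance (squid.example.org is a placeholder; 3128 is the default squid port, and CVMFS_HTTP_PROXY is the standard CernVM-FS client parameter):

    # route plain HTTP clients through the cache
    export http_proxy=http://squid.example.org:3128
    curl -sI http://frontier.cern.ch/ | head -1
    # for CernVM-FS, set the proxy in /etc/cvmfs/default.local instead:
    # CVMFS_HTTP_PROXY="http://squid.example.org:3128"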

Release notes: http://frontier.cern.ch/dist/rpms/frontier-squidRELEASE_NOTES

gfal2 2.17.2

GFAL (Grid File Access Library) is a C library providing an abstraction layer over the complexity of grid storage systems. Version 2 of GFAL simplifies file operations in a distributed environment as much as possible. The complexity of the grid is hidden from the client side behind a simple, common POSIX API.
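A minimal sketch using the command-line tools from the companion gfal2-util package, which are built on the library (the URLs are placeholders; any protocol supported by the installed gfal2 plugins, such as davs://, root:// or srm://, works the same way):

    gfal-stat davs://storage.example.org/path/file.dat                # stat a remote file
    gfal-copy davs://storage.example.org/path/file.dat /tmp/file.dat  # download it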

Detailed release notes at http://dmc.web.cern.ch/release/gfal2-2172

xrootd 4.11.3

The XRootD software framework is a fully generic suite for fast, low-latency and scalable data access, which can natively serve any kind of data organized as a hierarchical, filesystem-like namespace based on the concept of a directory. As a general rule, particular emphasis has been put on the quality of the core software parts.
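A hedged sketch with the standard XRootD client tools (host and paths are placeholders):

    xrdfs root://xrootd.example.org ls /store            # browse the namespace
    xrdcp root://xrootd.example.org//store/file.root .   # copy a file locally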

Detailed release notes at https://github.com/xrootd/xrootd/blob/v4.11.3/docs/ReleaseNotes.txt