Preview 2.24.0
Latest revision as of 15:54, 13 August 2019
Back to [https://wiki.egi.eu/wiki/Preview_Repository#Repository_configuration repository configuration]
APEL Client/Server 1.8.1
APEL is an accounting tool that collects accounting data from sites participating in the EGI and WLCG infrastructures as well as from sites belonging to other Grid organisations that are collaborating with EGI, including OSG, NorduGrid and INFN.
The accounting information is gathered from different sensors into a central accounting database where it is processed to generate statistical summaries that are available through the EGI/WLCG Accounting Portal.
Statistics can be viewed at different levels of detail by Users, VO Managers, Site Administrators and anonymous users, according to well-defined access rights.
More information on the APEL page.

[https://github.com/apel/apel/releases Release notes]:
- [client] Added option to update benchmarks/spec levels using a local configuration option rather than the BDII.
Secure STOMP Messenger (SSM) is designed to simply send messages using the STOMP protocol or via the ARGO Messaging Service (AMS). Messages are signed and may be encrypted during transit. Persistent queues should be used to guarantee delivery.
SSM is written in Python. Packages are available for RHEL 6 and 7, and Ubuntu Trusty.
The installation and configuration guide is available here: https://github.com/apel/ssm
Check also the EGI wiki for more information about APEL SSM.
- Added support for sending and receiving messages using the ARGO Messaging Service (AMS).
- Added option to send messages from a directory without needing to conform to the file naming convention that the dirq module requires.
- Fixed SSM hanging if its certificate is not authorised with the broker; it now tries other brokers, if available, and then shuts down correctly.
- Fixed an OpenSSL 1.1 syntax error by including missing argument to checkend.
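The STOMP protocol that SSM speaks is a simple text-based framing: a command line, headers, a blank line, then the body, terminated by a NUL byte. As a rough illustration only (this is not SSM's actual code; the helper name, defaults, and queue name below are hypothetical), a SEND frame can be built like this:

```python
# Hypothetical sketch of STOMP SEND framing, the kind of message SSM's
# transport layer exchanges with a broker. Not taken from the SSM source.

def build_send_frame(destination, body, extra_headers=None):
    """Build a STOMP SEND frame: command line, header lines, a blank
    line, the body, and a terminating NUL byte."""
    headers = {
        "destination": destination,
        "content-length": str(len(body.encode("utf-8"))),
    }
    if extra_headers:
        headers.update(extra_headers)
    head = "\n".join(f"{k}:{v}" for k, v in headers.items())
    return f"SEND\n{head}\n\n{body}\0"

# Example with a made-up queue name and body:
frame = build_send_frame("/queue/global.accounting.test",
                         "APEL-summary-job-message: v0.2")
```

In practice SSM signs (and optionally encrypts) the body before framing it, and persistent broker queues are what guarantee delivery, as noted above.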
The Advanced Resource Connector (ARC) middleware is an Open Source software solution to enable distributed computing infrastructures with the emphasis on processing large volumes of data. ARC provides an abstraction layer over computational resources, complete with input and output data movement functionalities. The security model of ARC is identical to that of Grid solutions, relying on delegation of user credentials and the concept of Virtual Organisations. ARC also provides client tools, as well as API in C++, Python and Java.
We are happy to announce the ARC 6.1.0 release, the first update of the ARC 6 series.
ARC 6.1.0 contains several bug fixes, as well as enhancements and new features; see the merge requests listed in the release notes for details.
Sites using SLURM together with the system-installed ENV/PROXY RTE must add a slurm_requirements option to arc.conf when upgrading to ARC 6.1.0. This mitigates a discovered bug that prevents the updated ENV/PROXY RTE from working correctly. Under the [lrms] block, please add the line:
slurm_requirements = --export=NONE
Note also that if you are using a patched submit-SLURM-job script, please make sure to update the base script using the one issued with ARC 6.1.0.
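Putting the above together, the relevant fragment of arc.conf would look roughly like this (only slurm_requirements comes from the note above; the lrms option value is the usual SLURM setting and your block will contain other site-specific options):

```
[lrms]
lrms = slurm
slurm_requirements = --export=NONE
```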
Find more details at http://www.nordugrid.org/arc/releases/6.1/release_notes_6.1.html
Future Support of ARC 5-series
Now that ARC 6.1.0 is released, ARC 5 will receive security updates only.
- No new feature development is planned or ongoing for ARC 5, and no bug-fixing development will happen on the ARC 5 code base in the future, except for security issues.
- Security fixes for ARC 5 will be provided until the end of June 2020.
- Production sites already running ARC 5 will be able to get deployment and configuration troubleshooting help via GGUS until the end of June 2021. We call this "operational site support".
- ARC 5 is available in EPEL7 and will stay there. EPEL8 will only contain ARC 6.
Davix is a C++ toolkit for advanced I/O on remote resources over HTTP-based protocols.
dCache provides a system for storing and retrieving huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual filesystem tree with a variety of standard access methods.
Detailed release notes on the official product site: https://www.dcache.org/downloads/1.9/release-notes-4.2.shtml
The Disk Pool Manager (DPM) is a lightweight storage solution for grid sites. It offers a simple way to create a disk-based grid storage element and supports the relevant protocols (SRM, gridFTP, RFIO) for file management and access. It focuses on manageability (ease of installation and configuration, low maintenance effort), while providing all the functionality required of a grid storage solution (support for multiple disk server nodes, different space types, and multiple file replicas in disk pools).
N.B. starting from this version, the gridftp and http frontends are built together with the DPM core, so they have new package names:
- lcgdm-dav -> dmlite-apache-httpd
- dpm-dsi -> dmlite-dpm-dsi
The xrootd frontend was likewise renamed, starting from v 1.12: dpm-xrootd -> dmlite-dpm-xrootd
The Dynamic Federations system allows to expose via HTTP and WebDAV a very fast dynamic name space, built on the fly by merging and caching (in memory) metadata items taken from a number of (remote) endpoints. More information.
Release notes: http://lcgdm.web.cern.ch/dynafed-150-released-epel
The frontier-squid software package is a patched version of the standard squid HTTP proxy cache software, pre-configured for use by the Frontier distributed database caching system. This installation is recommended for use by Frontier in the LHC CMS and ATLAS projects, and it also works well with the CernVM FileSystem and many other applications.
GFAL (Grid File Access Library) is a C library providing an abstraction layer over the complexity of grid storage systems. Version 2 of GFAL aims to simplify file operations in a distributed environment as much as possible: the complexity of the grid is hidden from the client side behind a simple, common POSIX-like API.
Detailed release notes at http://dmc.web.cern.ch/release/gfal2-2.16.3