Falcon LogScale 1.211.0 GA (2025-10-21)

Version           1.211.0
Type              GA
Release Date      2025-10-21
Availability      Cloud
End of Support    Next LTS
Security Updates  No
Upgrades From     1.150.0
Downgrades To     1.177.0
Config. Changes   No

Available for download two days after release.


Bug fixes and updates

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The EXTRA_KAFKA_CONFIGS_FILE configuration variable has been deprecated and is planned for removal no earlier than version 1.225.0. For more information, see RN Issue.
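
    For context, EXTRA_KAFKA_CONFIGS_FILE points LogScale at a properties file of additional Kafka client settings. The sketch below is illustrative only; the file path and property names are assumptions, not documented defaults:

      # Deprecated: path to a file with extra Kafka client properties
      EXTRA_KAFKA_CONFIGS_FILE=/etc/logscale/kafka-extra.properties

      # Illustrative contents of that properties file:
      # security.protocol=SASL_SSL
      # sasl.mechanism=PLAIN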

  • rdns() has been deprecated and will be removed in version 1.249. Use reverseDns() instead.
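
    As a migration sketch (the field name ip is an example; check the function reference for the exact parameters):

      // Deprecated:
      // rdns(ip)

      // Replacement:
      reverseDns(ip)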

New features and improvements

  • Security

    • Added new environment variable SAML_METADATA_ENDPOINT_URL, allowing users to specify where LogScale will fetch the IdP signing certificate. This provides an alternative to using SAML_IDP_CERTIFICATE and SAML_ALTERNATIVE_IDP_CERTIFICATE, and enables easier certificate management without having to restart LogScale with a new set of variables.

      The existing certificate configuration options remain available, and when both methods are specified, certificates from both sources will be used.
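
      A minimal configuration sketch; the URL below is a hypothetical IdP metadata endpoint, not a documented default:

        # Fetch the IdP signing certificate from the IdP metadata document
        # (hypothetical URL for illustration):
        SAML_METADATA_ENDPOINT_URL=https://idp.example.com/sso/saml/metadata

        # The static certificate variables remain supported; when set alongside
        # the endpoint, certificates from both sources are used:
        # SAML_IDP_CERTIFICATE=/path/to/idp-signing-cert.pem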

  • Storage

    • Moved bucket storage actions (for example, writing data to disk after bucket download, and encryption/decryption when applicable) to a dedicated threadpool. This should reduce blocking on the threadpool responsible for handling HTTP requests, which could previously cause nodes to become unresponsive.

    • Added support for archiving ingested logs to Azure Storage. Logs that are archived using Azure Storage are available for further processing in any external system that integrates with Azure.

      Users can configure Azure Storage archiving using the following settings in the Egress repository:

      • Bucket (required) – destination bucket for archived logs

      • Format – choose between NDJSON and Raw formatting for the stored file (default: NDJSON); see the sample line below

      • Archiving start – choose whether to archive all segments or only those starting after a specified UTC timestamp
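
      For reference, NDJSON stores one complete JSON object per line, while Raw stores only the original event text. The field set below is illustrative, not the documented archive schema:

        {"@timestamp": 1761004800000, "#repo": "example-repo", "@rawstring": "GET /index.html 200"}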

      For more information, see Azure Archiving.

  • API

    • Extended users' ability to manage lookup files by adding two REST API endpoints, filefromquery and fileoperation. Also extended the existing file REST API endpoint to support PATCH operations, giving users the ability to update existing files; previously, files could only be replaced in their entirety (see the request sketch below).

      The filefromquery endpoint provides the following functionality:

      • Support for creating and updating lookup files directly from the Save dropdown menu in the search results by clicking Lookup file. For more information, see Create a lookup file in the Search interface.

      • Support for updating lookup files via extensions to the existing file REST API endpoint.

      The fileoperation endpoint provides the following functionality:

      • Viewing the progress of operations started on other endpoints.

      • Updating the state of PATCH operations on the files endpoint.
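
      As a rough request sketch only: the path and payload below are assumptions for illustration, not the documented contract:

        # Hypothetical PATCH updating rows in an existing lookup file:
        curl -X PATCH "$LOGSCALE_URL/api/v1/repositories/myrepo/files/allowlist.csv" \
          -H "Authorization: Bearer $API_TOKEN" \
          -H "Content-Type: application/json" \
          -d '{"operations": [{"type": "updateRow", "row": {"ip": "10.0.0.1", "status": "blocked"}}]}'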

      For more information, see Lookup API.

  • Functions

    • Added two new functions for calculating edit (Levenshtein) distances, with a usage sketch after the list:

      • text:editDistance() – returns the edit distance between target and reference strings, capped at maxDistance

      • text:editDistanceAsArray() – returns an object array containing edit distances between a target string and multiple reference strings
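
      A hedged usage sketch, assuming the parameter names match the description above (target, reference, maxDistance); consult the function reference for the exact signature and output field:

        // Flag usernames within edit distance 2 of "admin" (illustrative):
        text:editDistance(target=username, reference="admin", maxDistance=2)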

Fixed in this release

  • Other

    • Fixed an issue where the process that deletes messages from the ingest queue would sometimes trigger, without cause, the error Skipping Kafka event deletion for this round since stripping topOffsets failed during the calculation phase.

Known Issues

  • Storage

    • For clusters using secondary storage where the primary storage on some nodes may be filling up (that is, primary disk usage is between PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE), those nodes may fail to transfer segments from other nodes (see the illustration below). The failure is indicated by the error java.nio.file.AtomicMoveNotSupportedException with the message "Invalid cross-device link".

      This does not corrupt data or cause data loss, but it will prevent the cluster from becoming fully healthy, and it could also prevent data from reaching adequate replication.
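
      For illustration, the values below are examples, not documented defaults:

        # Hypothetical values for the two thresholds named above:
        PRIMARY_STORAGE_PERCENTAGE=80
        PRIMARY_STORAGE_MAX_FILL_PERCENTAGE=95
        # A node at ~90% primary disk usage sits between the two and may log the
        # AtomicMoveNotSupportedException above when receiving segments.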