Falcon LogScale 1.211.0 GA (2025-10-21)
| Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes |
|---|---|---|---|---|---|---|---|---|
| 1.211.0 | GA | 2025-10-21 | Cloud | Next LTS | No | 1.150.0 | 1.177.0 | No |
Available for download two days after release.
Download
Use docker pull humio/humio-core:1.211.0 to download the latest version.
Bug fixes and updates
Deprecation
Items that have been deprecated and may be removed in a future release.
The EXTRA_KAFKA_CONFIGS_FILE configuration variable has been deprecated and is planned to be removed no earlier than version 1.225.0. For more information, see RN Issue.
rdns() has been deprecated and will be removed in version 1.249. Use reverseDns() as an alternative function.
New features and improvements
Security
Added new environment variable SAML_METADATA_ENDPOINT_URL, allowing users to specify where LogScale will fetch the IdP signing certificate. This provides an alternative to using SAML_IDP_CERTIFICATE and SAML_ALTERNATIVE_IDP_CERTIFICATE, and enables easier certificate management without having to restart LogScale with a new set of variables.
The existing certificate configuration options remain available, and when both methods are specified, certificates from both sources will be used.
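A minimal sketch of setting the new variable; the metadata URL below is a hypothetical placeholder for your IdP's actual endpoint:

```
# Hypothetical example: let LogScale fetch the IdP signing certificate
# from the IdP's SAML metadata endpoint (placeholder URL).
export SAML_METADATA_ENDPOINT_URL="https://idp.example.com/sso/saml/metadata"

# The existing certificate variables remain supported; if both methods
# are configured, certificates from both sources are used.
# export SAML_IDP_CERTIFICATE="/path/to/idp-signing-cert.pem"
```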
Storage
Moved bucket storage actions (for example, writing data to disk after bucket download, and encryption/decryption when applicable) to a dedicated threadpool. This should result in less blocking on the threadpool responsible for handling HTTP requests, which could previously cause nodes to become unresponsive.
Added support for archiving ingested logs to Azure Storage. Logs that are archived using Azure Storage are available for further processing in any external system that integrates with Azure.
Users can configure Azure Storage archiving with the following settings in the Egress repository:
Bucket (required) – destination bucket for archived logs
Format – choose between NDJSON or Raw formatting for the stored file (default: NDJSON)
Archiving start – select between archiving all segments or only those starting after a specified UTC timestamp
For more information, see Azure Archiving.
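For reference, NDJSON (the default format) stores one JSON object per line. A hypothetical archived event might look like the following; the field names are illustrative only:

```
{"@timestamp": "2025-10-21T12:00:00.000Z", "@rawstring": "GET /index.html 200", "host": "web-01"}
{"@timestamp": "2025-10-21T12:00:01.000Z", "@rawstring": "GET /healthz 200", "host": "web-02"}
```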
API
Extended users' ability to control lookup file management with the creation of two REST API endpoints, filefromquery and fileoperation. Also extended the existing REST API endpoint file to support PATCH operations, giving users the ability to update existing files; previously, users could only replace them in their entirety.
The endpoint filefromquery provides the following functionality:
Support for creating and updating lookup files directly from the dropdown menu in the search results; see Create a lookup file in the Search interface for more information.
Support for updating lookup files via extensions to an existing file's REST API.
The endpoint fileoperation provides the following functionality:
Allows users to view the progress of operations started on other endpoints.
Updates the state of PATCH operations on the files endpoint.
For more information, see Lookup API.
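A rough sketch of how these endpoints might be called; the exact paths, parameters, and payload shapes below are assumptions, so consult the Lookup API documentation for the real contract:

```
# Hypothetical sketch -- endpoint paths and payloads are assumptions,
# not the documented contract; see the Lookup API docs.

# Create a lookup file from a query result (filefromquery):
curl -X POST "$LOGSCALE_URL/api/v1/repositories/$REPO/filefromquery" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"queryString": "...", "fileName": "suspicious-ips.csv"}'

# View the progress of an operation started on another endpoint
# (fileoperation):
curl "$LOGSCALE_URL/api/v1/repositories/$REPO/fileoperation/$OPERATION_ID" \
  -H "Authorization: Bearer $TOKEN"

# Update part of an existing file in place via PATCH, rather than
# replacing it in its entirety:
curl -X PATCH "$LOGSCALE_URL/api/v1/repositories/$REPO/files/suspicious-ips.csv" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"changes": []}'
```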
Functions
Added two new functions for calculating edit (Levenshtein) distances:
text:editDistance() – returns the edit distance between target and reference strings, capped at maxDistance
text:editDistanceAsArray() – returns an object array containing edit distances between a target string and multiple reference strings
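A hedged sketch of possible usage in a query; maxDistance comes from the description above, but the other parameter names are assumptions rather than confirmed signatures:

```
// Hypothetical usage -- parameter names other than maxDistance are assumptions.
// Distance between the attempted username and "admin", capped at 3:
| dist := text:editDistance(field=username, reference="admin", maxDistance=3)

// Distances from one target to several references at once, as an object array:
| text:editDistanceAsArray(field=username, references=["admin", "root", "administrator"], maxDistance=3)
```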
Fixed in this release
Other
Fixed an issue where the process to delete messages from the ingest queue would sometimes trigger, without cause, the error Skipping Kafka event deletion for this round since stripping topOffsets failed during the calculation phase.
Known Issues
Storage
For clusters using secondary storage where the primary storage on some nodes in the cluster may be getting filled (i.e., the storage usage on the primary disk is halfway between PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE), those nodes may fail to transfer segments from other nodes. The failure will be indicated by the error java.nio.file.AtomicMoveNotSupportedException with the message "Invalid cross-device link".
This does not corrupt data or cause data loss, but it will prevent the cluster from being fully healthy, and could also prevent data from reaching adequate replication.