Falcon LogScale 1.219.0 GA (2025-12-16)
| Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes |
|---|---|---|---|---|---|---|---|---|
| 1.219.0 | GA | 2025-12-16 | Cloud | Next LTS | No | 1.150.0 | 1.177.0 | No |
Available for download two days after release.
Download
Use `docker pull humio/humio-core:1.219.0` to download the latest version.
Bug fixes and updates
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Automation and Triggers
LogScale now enforces a limit of 10 actions per trigger (alert or scheduled search). Existing triggers exceeding this limit will continue to run, but must comply with the limit when edited.
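As a sketch only (the constant and function names here are illustrative, not LogScale internals), the edit-time check behaves like this:

```python
MAX_TRIGGER_ACTIONS = 10  # limit introduced in this release


def validate_trigger_actions(actions: list[str]) -> None:
    """Reject an edit that attaches more than the allowed actions.

    Pre-existing triggers over the limit keep running; the check
    only fires when a trigger is created or edited.
    """
    if len(actions) > MAX_TRIGGER_ACTIONS:
        raise ValueError(
            f"{len(actions)} actions configured; the maximum is "
            f"{MAX_TRIGGER_ACTIONS} per trigger")


validate_trigger_actions([f"notify-team-{i}" for i in range(10)])  # accepted
```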
Advance Warning
The following items are due to change in a future release.
Security
Starting from LogScale version 1.237, support for insecure `ldap` connections will be removed. Self-hosted customers using LDAP will only be able to use secure `ldaps` connections.
Queries
Due to various upcoming changes to LogScale and the recently introduced regex engine, the following regex features will be removed in version 1.225:
- Octal notation
- Quantification of unquantifiable constructs
Octal notation is being removed due to logic application difficulties and its tendency to make typographical errors easier to overlook.
Here is an example of a common octal notation issue:
`/10\.26.\122\.128/`

In this example, `\122` is interpreted as the octal escape for `R` rather than the intended literal `122`. Similarly, the unescaped `.` matches not just the punctuation itself but any single character except newlines.

Any construction of `\x` where `x` is a number from 1 to 9 will always be interpreted as a backreference to a capture group. If the corresponding capture group does not exist, it is an error.

Quantification of unquantifiable constructs is being removed due to a lack of appropriate semantic logic, which leads to redundancy and errors.
Unquantifiable constructs being removed include:

- `^` (the start of string/start of line)
- `$` (the end of string/end of line)
- `?=` (a positive lookahead)
- `?!` (a negative lookahead)
- `?<=` (a positive lookbehind)
- `?<!` (a negative lookbehind)
- `\b` (a word boundary)
- `\B` (a non-word boundary)

For example, the end-of-text construct `$*` only has meaning for a limited number of occurrences. There can never be more than one occurrence of the end of the text at any given position, making quantified elements like `$*` redundant.

A common pitfall that causes this warning is when users copy and paste a glob pattern like `*abc*` in as a regex, but delimit the regex with start-of-text and end-of-text anchors:

`/^*abc*$/`

The proper configuration should look like this:

`/abc/`

For more information, see LogScale Regular Expression Engine V2.
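Both pitfalls can be reproduced outside LogScale; Python's `re` engine, for example, also treats three octal digits as a character escape. A short sketch:

```python
import re

# Octal pitfall: \122 is three octal digits, so it is read as the
# character chr(0o122) == "R", not the literal digits "122".
pattern = re.compile(r"10\.26.\122\.128")
assert pattern.search("10.26.R.128") is not None
assert pattern.search("10.26.122.128") is None  # intended match fails

# Glob pitfall: the glob *abc* is just an unanchored substring
# search, so the regex needs neither anchors nor stars.
assert re.search(r"abc", "xx-abc-yy") is not None
```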
Removed
Items that have been removed as of this release.
Configuration
Removed the following deprecated configuration variables:
- `S3_STORAGE_FORCED_COPY_SOURCE`
- `S3_BUCKET_STORAGE_PREFERRED_MEANS_FORCED`

Users previously using `S3_STORAGE_FORCED_COPY_SOURCE` should now use `S3_STORAGE_PREFERRED_COPY_SOURCE` instead.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `EXTRA_KAFKA_CONFIGS_FILE` configuration variable has been deprecated and is planned to be removed no earlier than version 1.225.0. For more information, see RN Issue.

`rdns()` has been deprecated and will be removed in version 1.249. Use `reverseDns()` as an alternative function.

The Secondary Storage feature is now deprecated and will be removed in LogScale 1.231.0.
The Bucket Storage feature provides superior functionality for storing rarely queried data in cheaper storage while keeping frequently queried data in hot storage (fast and expensive). For more information, see Bucket Storage.
Please contact LogScale support for any concerns about this deprecation.
New features and improvements
Storage
Enabled the new bucket queue implementation by default. It can be disabled via the `NewFileTransferQueuing` feature flag.
API
Added a new admin-level API for unsetting a segment's bucketId field. This is for segments that are on disk but not in bucket storage. In cases where a bucket storage has lost data, this API can be used to remove corresponding metadata from LogScale, ending repeated attempts to download the missing files.
Usage requires a POST call to the following endpoint, where `bucketField` specifies which bucket field to unset (e.g., "primary" or "secondary"):

`/api/v1/dataspaces/${dataspaceId}/datasources/${datasourceId}/segments/${segmentId}/unset-bucket-id?bucketField=${bucketField}`

Here's an example:

`curl -X POST "https://${clusterUrl}/api/v1/dataspaces/${dataspaceId}/datasources/${datasourceId}/segments/${segmentId}/unset-bucket-id?bucketField=primary" -H "Authorization: Bearer ${token}"`

Added the parameter `queryKind` to the GraphQL mutation `analyzeQuery`, which indicates what kind of query program is being validated/analyzed. Valid values for a standard search query are `{ standardSearch: {} }`; valid values for a filter prefix are `{ filterPrefix: {} }`.
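For illustration, the unset-bucket-id request URL described above can be assembled in Python. The identifiers are placeholders; only the path shape comes from this release note:

```python
from urllib.parse import quote


def unset_bucket_id_url(cluster_url: str, dataspace_id: str,
                        datasource_id: str, segment_id: str,
                        bucket_field: str = "primary") -> str:
    # bucketField is the first query parameter, so it follows a "?"
    # rather than an "&".
    return (f"{cluster_url}/api/v1/dataspaces/{quote(dataspace_id)}"
            f"/datasources/{quote(datasource_id)}"
            f"/segments/{quote(segment_id)}"
            f"/unset-bucket-id?bucketField={quote(bucket_field)}")


url = unset_bucket_id_url("https://cluster.example", "ds-1", "src-1", "seg-1")
assert url.endswith("/segments/seg-1/unset-bucket-id?bucketField=primary")
```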
Queries
Added support in the LogScale Regular Expression Engine V2 for hexadecimal escape sequences up to 4 digits in length using the following formats:

- `\x{n}`
- `\x{nn}`
- `\x{nnn}`
- `\x{nnnn}`

Note

Curly brackets are required for this syntax. This is in addition to the existing `\xnn` and `\unnnn` notations.

Added support for repeated backreferences in the LogScale Regular Expression Engine V2. For example, the pattern `(.)\1{2,3}` can now be used to detect sequences of repeated characters.
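The backreference pattern behaves the same way in Python's `re` engine, which makes for an easy sketch (the brace form `\x{...}` is LogScale-specific, so only the fixed-width escapes are shown):

```python
import re

# The fixed-width forms \xnn and \unnnn also exist in Python's re.
assert re.search(r"\x41\u0042", "AB") is not None

# Repeated backreference: (.)\1{2,3} is a character followed by
# 2-3 copies of itself, i.e. a run of 3-4 identical characters.
runs = re.compile(r"(.)\1{2,3}")
assert runs.search("helloooo") is not None  # matches "oooo"
assert runs.search("hello") is None         # longest run is only "ll"
```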
Metrics and Monitoring
Added the metric `currently-submitted-fetches-for-queries`, which measures the number of segment downloads the query scheduler is actively waiting to complete. This metric differs from `bucket-storage-fetch-for-query-queue` in that the latter counts all fetches the scheduler is planning to do for currently running queries, including those the scheduler has not yet requested.
Fixed in this release
Storage
Fixed a bug in the ordering of segment downloads. Downloads for queries now get priority over other downloads.
Fixed an issue, found in version 1.218.0, that could cause bucket uploads to become stuck.
Queries
Fixed an issue where warnings produced when merging worker states, such as `groupBy()` function limit breaches, were not consistently attached to a user's query results.
Fleet Management
Adjusted Fleet and Group Management processing to continue applying valid groups when encountering malformed filter queries. Previously, a single group with an invalid filter would prevent all subsequent groups from being processed.
Note
The user interface prevents creation of invalid filters, but filters created before LogScale v1.158.0 may contain malformed queries.
Packages
Fixed an issue where failed package installations or updates could incorrectly produce audit log events indicating that triggers were created or updated.
Known Issues
Storage
For clusters using secondary storage where the primary storage on some nodes may be filling up (i.e., storage usage on the primary disk is halfway between `PRIMARY_STORAGE_PERCENTAGE` and `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`), those nodes may fail to transfer segments from other nodes. The failure is indicated by the error `java.nio.file.AtomicMoveNotSupportedException` with the message "Invalid cross-device link". This does not corrupt data or cause data loss, but it will prevent the cluster from being fully healthy and could also prevent data from reaching adequate replication.
Improvement
Storage
AWS' Netty-based HTTP client is now the default for S3 bucket operations. It is also the default client for asynchronous operations in AWS SDK v2.
Users who wish to continue using Apache's Pekko HTTP client can revert by setting `S3_NETTY_CLIENT` to `FALSE`, then restarting the cluster.

This implementation provides the following additional metrics for monitoring the client connection pool:
- `s3-aws-bucket-available-concurrency`
- `s3-aws-bucket-leased-concurrency`
- `s3-aws-bucket-max-concurrency`
- `s3-aws-bucket-pending-concurrency-acquires`
- `s3-aws-bucket-concurrency-acquire-duration`
On clusters where non-humio thread dumps are available, it is also possible to inspect the state of the client thread pool by searching for the thread name prefix `bucketstorage-netty`.

The client is set with default values originating from AWS' SDK Netty client. However, users can fine-tune the client further with dedicated environment variables.
Improved internal queueing logic for bucket uploads and downloads to adjust the order of transfer when there is contention. Transfer order is now as follows:

1. Segment uploads
2. Lookup file uploads
3. Segment downloads
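A minimal sketch of that ordering as a priority queue; the numeric ranks are an assumption for illustration, not LogScale's implementation:

```python
import heapq

# Lower rank == transferred first, per the documented order.
RANK = {"segment-upload": 0, "lookup-file-upload": 1, "segment-download": 2}

pending: list[tuple[int, str]] = []
for transfer in ("segment-download", "lookup-file-upload", "segment-upload"):
    heapq.heappush(pending, (RANK[transfer], transfer))

order = [heapq.heappop(pending)[1] for _ in range(len(pending))]
assert order == ["segment-upload", "lookup-file-upload", "segment-download"]
```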
Packages
Improved error messages for package assets violating the latest package schema to better identify which asset specifically is causing validation errors. Error messages now contain the name and type of the offending asset.