Falcon LogScale 1.239.0 GA (2026-05-05)
| Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes |
|---|---|---|---|---|---|---|---|---|
| 1.239.0 | GA | 2026-05-05 | Cloud | Next LTS | No | 1.177.0 | 1.177.0 | No |
Download
Use docker pull humio/humio-core:1.239.0 to download this version
Bug fixes and updates
Deprecation
Items that have been deprecated and may be removed in a future release.
The following manuals have been moved to the archives:
The userId parameter for the updateDashboardToken GraphQL mutation has been deprecated and will be removed in version 1.273.
rdns() has been deprecated and will be removed in version 1.249. Use reverseDns() as an alternative function.
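As a migration sketch, a query using the deprecated function can typically be rewritten by swapping the function name; this assumes reverseDns() accepts the same field and as parameters as rdns(), which should be verified against the function reference:

```logscale
// Before (deprecated, removed in 1.249) — hypothetical field names:
// rdns(field=src_ip, as=hostname)

// After:
reverseDns(field=src_ip, as=hostname)
```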
Upgrades
Changes that may occur or be required during an upgrade.
Security
Upgraded Apache Log4j to version 2.25.4 to address security vulnerabilities.
New features and improvements
Configuration
Added the dynamic configuration option QuerySchedulerMaxCpuMsPerTimeSlice, which controls how much CPU time a chunk is allowed to take before deferral of the remaining processing is attempted. The default is 1,000 milliseconds.
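Dynamic configuration options are typically set through LogScale's GraphQL API. A sketch, assuming the enum value for the new option matches its name:

```graphql
mutation {
  setDynamicConfig(input: {
    # Assumed enum value for the new dynamic configuration option
    config: QuerySchedulerMaxCpuMsPerTimeSlice,
    # Example: lower the per-chunk CPU budget from the 1,000 ms default
    value: "500"
  })
}
```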
Metrics and Monitoring
The following metrics have been added:
query-segment-chunk-deferred - counts how many times the query scheduler stops execution part way through a chunk and defers the remaining work for later.
This is an indicator of the presence of one or more very slow queries. The query scheduler executes queries by scanning each segment in portions of a particular byte size (chunks, consisting of a number of blocks) and is only able to make prioritization decisions between chunks. If a chunk takes too long, the scheduler may stop execution part of the way through and defer the rest of the work, allowing it to context-switch to other queries.
block-count-in-chunk - counts the number of blocks included in each segment chunk for segments being read during queries.
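The deferral behavior these metrics observe can be illustrated with a minimal sketch (not LogScale's implementation): process the blocks of a chunk until a CPU budget is spent, then hand the remainder back for later scheduling.

```python
CHUNK_CPU_BUDGET_MS = 1000.0  # mirrors the QuerySchedulerMaxCpuMsPerTimeSlice default


def process_chunk(blocks, budget_ms=CHUNK_CPU_BUDGET_MS, cost_ms_per_block=None):
    """Process the blocks of a segment chunk, deferring the rest if the budget runs out.

    Returns (processed, deferred). cost_ms_per_block is a callable that simulates
    the per-block processing cost; it defaults to 1 ms per block.
    """
    processed, spent_ms = [], 0.0
    for i, block in enumerate(blocks):
        cost = cost_ms_per_block(block) if cost_ms_per_block else 1.0
        if spent_ms + cost > budget_ms and processed:
            # Budget exhausted mid-chunk: defer the remaining blocks so the
            # scheduler can context-switch to other queries. This is the event
            # the query-segment-chunk-deferred metric would count.
            return processed, blocks[i:]
        processed.append(block)
        spent_ms += cost
    # Whole chunk completed within budget; nothing deferred.
    return processed, []
```

A chunk whose total cost exceeds the budget is split, while a cheap chunk completes in one time slice.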
Fixed in this release
Security
Fixed an issue where the validation requiring each group name to be a unique identifier was not applied correctly for near-simultaneous requests involving multiple nodes.
Queries
Fixed an issue where updating a lookup file between query submission and execution could cause queries to fail unexpectedly.
Functions
Fixed an issue where link operator placement validation in the correlate() function was not sufficiently strict. The fix has also improved the related validation error messages.
Fixed an issue introduced in version 1.236 that caused correlate() queries running more than one iteration to find no results. This occurred when the repository was configured to use tag grouping and the query made use of the tags.
Known Issues
Storage
For clusters using secondary storage where the primary storage on some nodes in the cluster may be getting filled (that is, the storage usage on the primary disk is halfway between PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE), those nodes may fail to transfer segments from other nodes. The failure is indicated by the error java.nio.file.AtomicMoveNotSupportedException with the message "Invalid cross-device link". This does not corrupt data or cause data loss, but it will prevent the cluster from being fully healthy and could also prevent data from reaching adequate replication.
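The affected usage range can be sketched as a small helper, assuming the two thresholds are percentages and "halfway between" means at or past the midpoint of the two values:

```python
def in_transfer_risk_range(used_pct, primary_storage_pct, max_fill_pct):
    """Return True when primary-disk usage falls in the range where segment
    transfers from other nodes may fail with AtomicMoveNotSupportedException
    ("Invalid cross-device link"): at or beyond the halfway point between
    PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE,
    but not yet at the maximum fill threshold."""
    halfway = primary_storage_pct + (max_fill_pct - primary_storage_pct) / 2
    return halfway <= used_pct < max_fill_pct
```

For example, with PRIMARY_STORAGE_PERCENTAGE=80 and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE=96, the midpoint is 88%, so a node at 90% usage would be in the affected range.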
Improvement
Auditing and Monitoring
The fields orgId and CID have been added to activity logs for Schedule PDF Reports where available. Additionally, scheduledReportId, scheduledReportName, orgId, and CID are now also sent to the PDF Render Service for logging purposes when users request a PDF be rendered.
Functions
correlate() queries now use selective scanning. Instead of always scanning all data, the query engine selects which pipelines and segments to scan based on the query structure and the data available on disk or in bucket storage at the time of query submission. This can significantly improve performance for queries where only a subset of the data is relevant. This feature will be available once all nodes in the cluster are running at least version 1.239 and are not running multi-cluster search.
The default iteration limit for correlate() has been raised from 5 to 10, and the maximum from 10 to 20, as selective scanning may require additional iterations to converge.
Packages
Backoff-retry logic has been added to LogScale's functionality for retrieving packages from disk. Immediately after a new node is started, when packages may not yet be synchronized, this asynchronous process gives the cluster a chance to catch up.
Note
The operation to retrieve packages may still fail; however, failures will now be less frequent.
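A backoff-retry loop of this kind can be sketched as follows; this is an illustrative pattern, not LogScale's implementation, and the exception type and delays are assumptions:

```python
import time


def fetch_with_backoff(fetch, attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry `fetch` with exponential backoff, giving a freshly started
    cluster time to synchronize packages before giving up."""
    for attempt in range(attempts):
        try:
            return fetch()
        except LookupError:
            if attempt == attempts - 1:
                raise  # retrieval may still fail, just less often
            # Wait base_delay, 2*base_delay, 4*base_delay, ... between attempts
            sleep(base_delay * (2 ** attempt))
```

The `sleep` parameter is injected so the loop can be exercised without real delays.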
A new validation has been added to the existing checks that enforce the maximum package file size during upload. Package size is now checked on installation and update in two ways:
As before, the package is checked against the environment variable MAX_FILEUPLOAD_SIZE. If the package is larger than this value, it is rejected.
As a new validation, if the package serializes to a size too large to be placed on LogScale's Kafka queue, as determined by the default Kafka message size (2 MB), the package is also rejected with the error message The package is too large.
Note
The second validation does not change which packages can be installed: packages exceeding the Kafka message size limit would still fail during installation. The change detects the error earlier and provides a meaningful error message instead of an internal exception.
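The two checks described above can be sketched as a small validator; the function name and error handling here are illustrative, and only the 2 MB Kafka limit and the MAX_FILEUPLOAD_SIZE check come from the release note:

```python
KAFKA_MESSAGE_LIMIT_BYTES = 2 * 1024 * 1024  # default Kafka message size (2 MB)


def validate_package_size(serialized: bytes, max_upload_bytes: int) -> None:
    """Reject a serialized package that is too large to upload or enqueue.

    Raises ValueError if either check fails; returns None otherwise.
    """
    # First check (pre-existing): mirrors the MAX_FILEUPLOAD_SIZE
    # environment variable limit on uploads.
    if len(serialized) > max_upload_bytes:
        raise ValueError("Package exceeds MAX_FILEUPLOAD_SIZE")
    # Second check (new): the package must fit on LogScale's Kafka queue.
    if len(serialized) > KAFKA_MESSAGE_LIMIT_BYTES:
        raise ValueError("The package is too large")
```

A package under both limits passes silently, while one over the Kafka limit is rejected up front rather than failing later with an internal exception.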