Falcon LogScale 1.240.0 GA (2026-05-12)
| Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes |
|---|---|---|---|---|---|---|---|---|
| 1.240.0 | GA | 2026-05-12 | Cloud | Next LTS | No | 1.177.0 | 1.177.0 | No |
Download
Use docker pull humio/humio-core:1.240.0 to download the latest version
Bug fixes and updates
Removed
Items that have been removed as of this release.
GraphQL API
The deprecated GraphQL mutations createScheduledSearch and updateScheduledSearch have been removed.
Deprecation
Items that have been deprecated and may be removed in a future release.
The following manuals have been moved to the archives:
The userId parameter for the updateDashboardToken GraphQL mutation has been deprecated and will be removed in version 1.273.
rdns() has been deprecated and will be removed in version 1.249. Use reverseDns() as an alternative function.
New features and improvements
Configuration
Uploads and downloads now use separate queues with a separate concurrency limit for each. The following configuration options have been added:
S3_STORAGE_MAX_CONCURRENT_UPLOADS - Controls the maximum concurrency of uploads to bucket storage. Defaults to one slot for every two CPU cores.
S3_STORAGE_MAX_CONCURRENT_DOWNLOADS - Controls the maximum concurrency of downloads from bucket storage. Defaults to one slot for every two CPU cores.
S3_STORAGE_TRANSFER_THREAD_POOL_SIZE - Controls the pool size for the shared thread pool used to execute uploads and downloads. Defaults to 50% of the node's CPU cores.
Some parts of the transfer process may be CPU-intensive, for example handling segment encryption. The concurrency of this work is controlled via the thread pool size. It is recommended to leave this at its default value, since permitting too much CPU-intensive work for bucket transfers at a time can be disruptive to the rest of the system.
The S3_STORAGE_CONCURRENCY setting, and similar settings for other bucket providers, is deprecated for removal in version 1.252.0. To ease migration, S3_STORAGE_MAX_CONCURRENT_UPLOADS and S3_STORAGE_MAX_CONCURRENT_DOWNLOADS will use the value of S3_STORAGE_CONCURRENCY as a default if the latter is configured. These changes also apply to the GCP and AZURE bucket types in addition to the S3 bucket type.
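As an illustrative sketch only (the numeric values below are examples, not recommendations), the new limits can be pinned explicitly via environment variables when starting the container:

```shell
# Illustrative only: explicit transfer limits for a single node. If these are
# left unset, LogScale derives defaults from the node's CPU core count as
# described above.
docker run \
  -e S3_STORAGE_MAX_CONCURRENT_UPLOADS=8 \
  -e S3_STORAGE_MAX_CONCURRENT_DOWNLOADS=8 \
  -e S3_STORAGE_TRANSFER_THREAD_POOL_SIZE=4 \
  humio/humio-core:1.240.0
```

As noted above, leaving the thread pool size at its default is recommended, since the pool bounds the CPU-intensive parts of transfers such as segment encryption.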
Fixed in this release
Queries
Fixed an issue where some very permissive regular expressions would cause subsequent results highlighting to exhaust a node's available memory.
Fixed an issue where very long regular expressions (greater than 10,000 characters) would cause a query to fail.
Fixed an issue where multi-cluster search queries were not correctly reflecting that they had been stopped. This occurred in cases where queries were stopped before all dependencies were ready, such as defineTable() subqueries or files.
Known Issues
Storage
For clusters using secondary storage where the primary storage on some nodes in the cluster may be getting filled (that is, the storage usage on the primary disk is halfway between PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE), those nodes may fail to transfer segments from other nodes. The failure will be indicated by the error java.nio.file.AtomicMoveNotSupportedException with the message "Invalid cross-device link". This does not corrupt data or cause data loss, but it will prevent the cluster from being fully healthy, and could also prevent data from reaching adequate replication.
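One way to spot nodes in the affected band is to compare the primary disk's usage percentage against the two configured thresholds. The sketch below is illustrative: the 80/95 values are assumptions, not product defaults; substitute your cluster's configured PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE.

```shell
# Sketch: succeeds when a usage percentage sits between the two thresholds,
# i.e. the band in which this known issue can occur. The 80 and 95 used in the
# example call are placeholder values, not defaults from the product.
in_risk_band() {
  usage="$1"; low="$2"; high="$3"
  [ "$usage" -ge "$low" ] && [ "$usage" -lt "$high" ]
}

# A node at 85% primary disk usage with thresholds 80/95 is in the risky band:
if in_risk_band 85 80 95; then
  echo "in risk band"
fi
```

Feed the function the current usage of the primary volume (for example from df on the primary data mount) to check a live node.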
Improvement
Storage
Reworked bucket storage concurrency controls to provide better granularity. Bucket storage uploads and downloads previously shared the same concurrency limit (S3_STORAGE_CONCURRENCY) and used a shared queue where uploads always received priority over downloads.
Configuration
Added the dynamic configuration option QuerySchedulerMaxCpuMsPerTimeSlice, which controls how much CPU time a chunk is allowed to take before the scheduler attempts to defer the remaining processing. The default is 1,000 milliseconds.
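As dynamic configuration, this option can be changed at runtime rather than via environment variables. The sketch below assumes the setDynamicConfig GraphQL mutation accepts this option name; verify the exact mutation and enum shape against your cluster's GraphQL schema before use. $LOGSCALE_URL and $TOKEN are placeholders.

```shell
# Sketch, not verified against this release: setting the dynamic option through
# the GraphQL API. The mutation name and enum value are assumptions to confirm
# against your cluster's schema.
curl -s "$LOGSCALE_URL/graphql" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query":"mutation { setDynamicConfig(input: { config: QuerySchedulerMaxCpuMsPerTimeSlice, value: \"1000\" }) }"}'
```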
Queries
Implemented the ability to stop work mid-chunk in the query scheduler, in order to switch between queries more responsively when slow queries are running. This behavior can be opted out of via the AllowQuerySchedulerToBailOnSlowChunks feature flag, which is planned for removal in a future version.
Metrics and Monitoring
Added the metric query-segment-chunk-deferred. The query scheduler executes queries by scanning each segment in portions of a particular byte size (chunks, consisting of a number of blocks) and is only able to make prioritization decisions between chunks. If a chunk takes too long, the scheduler may stop execution part way through and defer the rest of the work for later. This allows the scheduler to context switch to other queries, even when a very slow query is present where chunks take a long time. This metric counts how many times that kind of deferment occurs, which is an indicator of the presence of one or more very slow queries.
Added the metric block-count-in-chunk, which counts the number of blocks included in each segment chunk for segments being read during queries.
The following changes have been made to metrics:
bucket-storage-transfer-free-slots has been replaced by bucket-storage-upload-free-slots and bucket-storage-download-free-slots.
node-to-node-transfer-free-slots has been renamed to node-to-node-download-free-slots.