Falcon LogScale 1.207.1 LTS (2025-10-16)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes |
---|---|---|---|---|---|---|---|---|
1.207.1 | LTS | 2025-10-16 | Cloud, On-Prem | 2026-10-31 | Yes | 1.150.0 | 1.177.0 | No |
Download
Use docker pull humio/humio-core:1.207.1 to download the latest version
These notes include entries from the following previous releases: 1.206.0, 1.205.0, 1.204.0, 1.203.0, 1.202.0
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Configuration
The configuration option VERIFY_CRC32_ON_SEGMENT_FILES (default: true), which can be used to disable Cyclic Redundancy Check (CRC) verification when reading segments, is planned to be removed in version 1.213.
Deprecation
Items that have been deprecated and may be removed in a future release.
The EXTRA_KAFKA_CONFIGS_FILE configuration variable has been deprecated and is planned to be removed no earlier than version 1.225.0. For more information, see RN Issue.
The rdns() function has been deprecated and will be removed in version 1.249. Use reverseDns() as an alternative function.
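As a minimal sketch of the substitution (the source.ip and hostname names, and the assumption that reverseDns() accepts the same field and as parameters as rdns(), are illustrative rather than confirmed by these notes):

```
// Deprecated form, removed in 1.249:
// rdns(field=source.ip, as=hostname)

// Replacement, assuming equivalent parameters on reverseDns():
reverseDns(field=source.ip, as=hostname)
```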
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Automation and Triggers
Logs regarding the status of the scheduled search job now have the fields category=ScheduledSearch and subCategory=Job, instead of category=Job and no subCategory. For more information, see Query Scheduling.
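For example, a saved search over LogScale's own logs that previously filtered on category=Job could be updated as sketched below (any surrounding query context, such as the repository or tags used to select the internal logs, is an assumption and not part of these notes):

```
// New field values emitted by the scheduled search job
category=ScheduledSearch subCategory=Job
// Previously matched with: category=Job (and no subCategory field)
```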
Storage
Changed the default value for AUTOSHARDING_MAX from 131,072 to 12,288 for a more conservative approach to prevent datasource explosion in Global Database. The new default value is based on observed autoshard maximums in cloud environments.
Configuration
The AUTOSHARDING_MAX configuration variable is no longer deprecated. It is retained as a safety measure against unlimited autoshard creation.
Queries
Changed behavior to respond with ServiceUnavailable when all query coordinators are unreachable, instead of starting queries on the receiving node. This allows users to retry later rather than attempting queries that are likely to fail due to network issues or other problems.
Functions
The correlate() function now consistently selects the earliest candidate events first, based on either @timestamp or @ingesttimestamp depending on query submission parameters. For more information, see correlate().
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Upgraded the bundled JDK from version 24.0.1 to 24.0.2.
Upgraded the Kafka client version to 4.1.0. This upgrade does not affect Kafka server version compatibility.
New features and improvements
GraphQL API
Added new parameter allowInPlaceMigration for the addOrganizationForBucketTransfer GraphQL mutation. When set to true, this bypasses bucket upload overwrite checks for S3 and Azure, enabling in-place segment migrations. This behavior is unchanged for Google Cloud Storage (GCS), as it does not implement these checks. For more information, see addOrganizationForBucketTransfer().
Configuration
The PDF Render Service now supports TLS/HTTPS connections for enhanced security. This allows the service to operate in secure environments with encrypted communication.
The following environment variables enable the TLS feature:
TLS_ENABLED - Set to true to enable HTTPS mode
TLS_CERT_PATH - Path to TLS certificate file
TLS_KEY_PATH - Path to TLS private key file
TLS_CA_PATH - Optional CA certificate path
When TLS is enabled, the service supports TLS versions 1.2 and 1.3. The service maintains backward compatibility: when TLS_ENABLED is not set or set to false, it operates in HTTP mode.
This enhancement improves security for Scheduled PDF Reports generation in enterprise environments requiring encrypted connections.
For more information, see PDF Render Service.
Added new configuration variable MAXMIND_USE_HTTP_PROXY to control whether MaxMind database downloads for the query functions asn() and ipLocation() should use the configured HTTP proxy. The default is to use the proxy, which is the same behaviour as before this change. For more information, see HTTP Proxy Client Configuration and MaxMind Configuration.
Metrics and Monitoring
Added new metrics starvation-timer-<thread-pool>-<tid> and duration-timer-<thread-pool>-<tid> for default dispatchers, providing more detailed thread pool behavior analysis.
Added new metrics to track the total time spent on segment operations:
decompress-segment-query-total: total time spent on segment decompression for queries
load-segment-query-total: total time spent on segment loading for queries
Added additional node-level metrics to the humio-metrics option time-livequery, which measures the amount of CPU time used in static searches as a fraction of wall clock time:
time-query-decompress-segment
time-query-read-segment
time-query-map-segment
Functions
The findTimestamp() function now includes a new timezoneField parameter, which provides dynamic timezone handling. This allows you to:
Specify a field containing the default timezone for timestamps that lack timezone information
Use the same parser across multiple datasources with different default timezones.
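A minimal parser sketch using the new parameter; only timezoneField itself comes from these notes, while the event_tz field name is an illustrative assumption:

```
// Use the timezone carried in the event_tz field as the default
// for timestamps that lack timezone information
findTimestamp(timezoneField=event_tz)
```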
Fixed in this release
User Interface
Tables in the Search page have been fixed for the following issues:
Copying rows from multiple pages at different stages of a live query completion resulted in data inconsistency.
An infinite loading state occurred in static queries when trying to access pages that hadn't been fetched yet.
For static queries:
Disabled row selection (checkboxes disabled)
Added tooltip to inform that row selection is not available until query completion.
For live queries:
Row selection now limited to current page
Table updates automatically pause during row selection
Row deselection required to navigate between pages and re-enable table updates.
These fixes and improvements prevent misleading comparisons between data captured at different processing stages, which is especially important when copying or analyzing results across multiple pages.
Fixed an issue where the progress calculation of queries using defineTable() would incorrectly fluctuate, causing the progress bar in the search UI to move back and forth. Queries are now weighted evenly to ensure consistent progress tracking even if the work of a query is yet to be calculated.
Automation and Triggers
Fixed two issues with scheduled searches:
A failure to update a scheduled search could cause it to get stuck and not run until cluster restart.
A deleted scheduled search could cause the scheduled search job to continuously log that it was waiting for the scheduled search to finish.
For more information, see Scheduled searches.
Storage
Fixed an issue where the logs indicating which query took the longest to process a segment would appear long after query completion. Logging will now be delayed by no more than 10 seconds.
For more information, see LogScale Internal Logging.
Fixed an issue where a race condition between start-up and digest assignment would prevent new nodes from receiving digest partitions. This change also makes partition release more efficient during node shutdown, potentially improving ingest latency during digest reassignment.
Fixed an issue where merging segments could use excessive memory when processing events with large numbers of distinct fields. LogScale will now limit memory usage by stopping field extraction optimization when too many distinct field names are encountered.
For more information, see Creating Segment files.
GraphQL API
Fixed an issue where the GraphQL mutation createPersonalUserTokenV2 would fail with an unspecified error message.
For more information, see createPersonalUserTokenV2().
Dashboards and Widgets
Fixed an issue that allowed users to save a dashboard using the FixedList Parameter without a defined value, causing dashboard exports to fail.
Fixed an issue where invalid values and label lists for the FixedList Parameter type would not trigger the Save Dashboard with Invalid Changes warning.
Extended the Single Value widget to be compatible with query results containing one or two (for multiple grid visualization) groupBy() fields, where the groupBy() fields make up the entire set. For more information, see Displaying Values in a Grid.
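For instance, results shaped like the following sketch can now drive the widget (host and datacenter are illustrative field names, not taken from these notes):

```
// One groupBy() field plus a single aggregate value per group
groupBy([host], function=count())

// Two groupBy() fields, e.g. for the multiple grid visualization:
// groupBy([datacenter, host], function=count())
```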
Queries
Fixed a race condition between live query submission and digest start, in which the static part assigned to a worker cluster would be omitted if a live query coordinator submitted work to a worker cluster, starting a new digest session.
For more information, see Digest Rules.
Fixed an issue where certain regex patterns that could not be compiled by the JitRex engine would lead to very slow query submission and excessive resource usage.
For more information, see Regular Expression Syntax.
Fixed an issue where events would incorrectly remain unredacted when query strings used for redaction contained derived tags, such as #repo.
Metrics and Monitoring
Fixed an issue with time unit conversions for meter values in internal metrics reporting (introduced in v1.196), where due to incorrect unit conversion, values were off by a factor of 10^9. Only internal metrics exports were affected - logged metrics and Prometheus metrics were unaffected. Histogram metric labels were also corrected to show as HISTOGRAM instead of TIMER.
The node-level metric load-segment-total has been fixed, as the computation did not include the time spent loading segments for queries and segment merging.
For more information, see load-segment-total Metric.
Other
Fixed a concurrency issue where parsing YAML content occasionally caused threads to loop infinitely.
Improvement
User Interface
Improved query parameter detection accuracy for queries using parameters with special characters:
Now correctly identifies parameters with . characters in the name (for example, ?http.method).
Properly detects quoted parameter names in default value syntax (for example, ?{"http.method"="GET"}).
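A minimal sketch exercising both cases in a query (the method field name is an illustrative assumption; the parameter syntax is taken from the items above):

```
// Reference a parameter whose name contains a dot
method = ?http.method

// The same parameter with a quoted name and a default value:
// method = ?{"http.method"="GET"}
```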
Improved the responsiveness of the Save searches panel by introducing breakpoints on initial width calculation and allowing more space for the query editor, especially on smaller screen sizes.
The Save searches panel now includes new functionality:
Panel now closes when clicking the relevant icons
Query field resizes when the panel is closed
Saved searches can be grouped by Package, Labels, or Last modified.
Storage
Improved query planning performance for scenarios with many bucket-stored segments by implementing cached precomputed tables instead of expensive rendezvous hashing. The hash table size defaults to 10,000 rows and can be configured using NUMBER_OF_ROWS_IN_SEGMENT_TO_HOST_MAPPING_TABLE.
GraphQL API
The assignedTasks and unassignedTasks fields of the ClusterNode GraphQL datatype now only show tasks relevant to the node's nodeRole, providing clearer information about active tasks.
Queries
Revised coordination of aggregate streaming queries to run on query coordinators instead of request-receiving nodes, preventing the resource starvation and slow performance that occur when receiving nodes are improperly sized for query coordination.
Improved performance by compiling queries once instead of twice when starting alert jobs.
Improved query response handling by streaming large JSON responses. This enhancement allows responses to start streaming faster even when the entire response is very large, particularly helping in cases where responses previously hit request timeouts.
Improved the stability of multi-cluster search by implementing retry logic for failed polls on certain types of exceptions.
Multi-cluster search worker clusters no longer execute the result calculation pipeline for multi-cluster queries. This eliminates external-function calls and reverse DNS calls on remote clusters in multi-cluster search queries, reducing resource consumption.
For more information, see Searches in a Multi-Cluster Setup.
Queries will now preferentially read segments from non-evicted hosts, avoiding reading data from hosts that are being decommissioned.
For more information, see Ingestion: Digest Phase.
Query workers under digest load now respond more quickly when canceling running queries.
Metrics and Monitoring
Added new metrics to help monitor/diagnose segment fetching queue issues:
segment-fetching-trigger-queue-hit-full-after-global-scan-counter
segment-fetching-trigger-queue-offer-from-global-scan-counter
segment-fetch-requested-but-already-in-progress
segment-fetch-requested-but-upstream-has-been-deleted
segment-changes-job-trigger-full-global-scan-counter
Updated the default histogram implementation from SlidingTimeWindowArrayReservoir to LockFreeExponentiallyDecayingReservoir for improved memory utilization in cases of high metric cardinality or a high sample rate. The new implementation uses reservoir sampling with exponential decay, providing better performance under high concurrency while maintaining statistical accuracy.
Note
Some metric values may shift from their former baselines due to statistical sampling.