Falcon LogScale 1.207.1 LTS (2025-10-16)
| Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes |
|---|---|---|---|---|---|---|---|---|
| 1.207.1 | LTS | 2025-10-16 | Cloud, On-Prem | 2026-10-31 | Yes | 1.150.0 | 1.177.0 | No |
Download
Use `docker pull humio/humio-core:1.207.1` to download the latest version.
These notes include entries from the following previous releases: 1.207.0, 1.206.0, 1.205.0, 1.204.0, 1.203.0, 1.202.0
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Automation and Triggers
From version 1.219.0, LogScale will enforce a new limit of at most 10 actions per trigger (alert or scheduled search). Any existing trigger violating the limit will continue to run, but if you edit the trigger, you will be forced to restrict the number of actions to 10.
Configuration
The configuration option `VERIFY_CRC32_ON_SEGMENT_FILES` (default: `true`), which can be used to disable Cyclic Redundancy Check (CRC) verification when reading segments, is planned to be removed in version 1.213.
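Until the removal, deployments relying on the current default can pin it explicitly in the environment; a minimal sketch:

```shell
# Explicitly pin today's default. The option is slated for removal in 1.213,
# after which CRC verification will no longer be configurable.
VERIFY_CRC32_ON_SEGMENT_FILES=true
```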
Removed
Items that have been removed as of this release.
GraphQL API
Removed the deprecated GraphQL field `isValidFilterAlertQuery` on the type `queryAnalysis` returned from the `queryAnalysis` GraphQL query.
Deprecation
Items that have been deprecated and may be removed in a future release.
A system metric used in the Fleet overview interface is now deprecated. New collectors communicating with Fleet Management will instead ship two new separate metrics: one containing errors and another containing log source information. This allows shipping the information only when something has changed, thereby reducing load.
The `EXTRA_KAFKA_CONFIGS_FILE` configuration variable has been deprecated and is planned to be removed no earlier than version 1.225.0. For more information, see RN Issue.
`rdns()` has been deprecated and will be removed in version 1.249. Use `reverseDns()` as an alternative function.
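Migrating typically means swapping the function name. The sketch below assumes `rdns()` and `reverseDns()` share a similar `field`/`as` parameter shape; verify against the function reference before use:

```logscale
// Deprecated (removed in 1.249):
// rdns(field=client_ip, as=hostname)
// Replacement (parameter names assumed equivalent):
reverseDns(field=client_ip, as=hostname)
```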
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Automation and Triggers
Logs regarding the status of the scheduled search job now have the fields `category=ScheduledSearch` and `subCategory=Job`, instead of `category=Job` and no `subCategory`. For more information, see Query Scheduling.
Storage
Changed the default value for `AUTOSHARDING_MAX` from 131,072 to 12,288 for a more conservative approach that prevents datasource explosion in the Global Database. The new default value is based on observed autoshard maximums in cloud environments.
Configuration
The `AUTOSHARDING_MAX` configuration variable is no longer deprecated. It is retained as a safety measure against unlimited autoshard creation.
Dashboards and Widgets
Event list's format column controls and field interactions that might alter the visualization or the query behind it have now been made inaccessible on dashboards.
Queries
Changed behavior to respond with ServiceUnavailable when all query coordinators are unreachable, instead of starting queries on the receiving node. This allows users to retry later rather than attempting queries that are likely to fail due to network issues or other problems.
Metrics and Monitoring
Metrics backed by exponential decay will now clear values if no new metrics arrive within 5 minutes (the bias period of the weighted metrics) rather than showing the same value until new data arrives.
Functions
The `correlate()` function now consistently selects the earliest candidate events first, based on either `@timestamp` or `@ingesttimestamp`, depending on query submission parameters. For more information, see `correlate()`.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Upgraded the bundled JDK from version 24.0.1 to 24.0.2.
Upgraded the Kafka client version to 4.1.0. This upgrade does not affect Kafka server version compatibility.
New features and improvements
GraphQL API
Added updateDashboardFromTemplate, updateParserFromTemplate and updateSavedQueryFromTemplate GraphQL mutations to allow the updating of dashboards, parsers, and saved queries using their YAML representation.
Added a new parameter `allowInPlaceMigration` for the `addOrganizationForBucketTransfer` GraphQL mutation. When set to `true`, this bypasses bucket upload overwrite checks for S3 and Azure, enabling in-place segment migrations. The behavior is unchanged for Google Cloud Storage (GCS), as it does not implement these checks. For more information, see addOrganizationForBucketTransfer().
Configuration
The PDF Render Service now supports TLS/HTTPS connections for enhanced security. This allows the service to operate in secure environments with encrypted communication.
The following environment variables enable the TLS feature:
`TLS_ENABLED` - Set to `true` to enable HTTPS mode
`TLS_CERT_PATH` - Path to the TLS certificate file
`TLS_KEY_PATH` - Path to the TLS private key file
`TLS_CA_PATH` - Optional CA certificate path
When TLS is enabled, the service supports TLS versions 1.2 and 1.3. The service maintains backward compatibility: when `TLS_ENABLED` is not set or is `false`, it operates in HTTP mode.
This enhancement improves security for Schedule PDF Reports generation in enterprise environments requiring encrypted connections.
For more information, see PDF Render Service.
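As a sketch, a TLS-enabled deployment might set the variables like this; the paths are placeholders for your own certificate material:

```shell
# Example environment for a TLS-enabled PDF Render Service.
# Certificate paths below are illustrative placeholders.
TLS_ENABLED=true
TLS_CERT_PATH=/etc/pdf-render/tls/server.crt
TLS_KEY_PATH=/etc/pdf-render/tls/server.key
TLS_CA_PATH=/etc/pdf-render/tls/ca.crt  # optional
```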
Added endpoint overrides for the secret manager integration used for Azure ingest:
For the secret manager client, the endpoint is configured with `SECRET_MANAGER_CLIENT_HOST_ENDPOINT_OVERRIDE`, `SECRET_MANAGER_CLIENT_PORT_ENDPOINT_OVERRIDE`, and `SECRET_MANAGER_CLIENT_PROTOCOL_ENDPOINT_OVERRIDE`.
For the STS client, the endpoint is configured with `SECRET_MANAGER_STS_HOST_ENDPOINT_OVERRIDE`, `SECRET_MANAGER_STS_PORT_ENDPOINT_OVERRIDE`, and `SECRET_MANAGER_STS_PROTOCOL_ENDPOINT_OVERRIDE`.
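For illustration, overriding the secret manager client endpoint might look like this; the host, port, and protocol values are examples only:

```shell
# Illustrative override pointing the secret manager client at an internal
# endpoint; substitute your own protocol, host, and port.
SECRET_MANAGER_CLIENT_PROTOCOL_ENDPOINT_OVERRIDE=https
SECRET_MANAGER_CLIENT_HOST_ENDPOINT_OVERRIDE=secrets.internal.example.com
SECRET_MANAGER_CLIENT_PORT_ENDPOINT_OVERRIDE=8443
```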
Added a new configuration variable `MAXMIND_USE_HTTP_PROXY` to control whether MaxMind database downloads for the query functions `asn()` and `ipLocation()` should use the configured HTTP proxy. The default is to use the proxy, which is the same behavior as before this change. For more information, see HTTP Proxy Client Configuration, MaxMind Configuration.
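For example, to bypass the proxy for these downloads (the default `true` preserves the previous behavior):

```shell
# Let MaxMind database downloads bypass the configured HTTP proxy.
MAXMIND_USE_HTTP_PROXY=false
```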
Ingestion
The Parser editor now reports parser errors if the function sets only `@error_msg` and not `@error_msg[]`. This solves an issue related to the `parseCEF()` function. Parser errors that were previously not displayed as errors are now correctly indicated within the parser editor.
For more information, see Errors, Validation Checks, and Warnings.
Dashboards and Widgets
The `Time Chart`, `Bar Chart`, `Pie Chart`, `Scatter Chart`, and `Sankey` widgets now support multiple color palettes for differentiating between series.
Metrics and Monitoring
Added new metrics `starvation-timer-<thread-pool>-<tid>` and `duration-timer-<thread-pool>-<tid>` for default dispatchers, providing more detailed thread pool behavior analysis.
Added new metrics to track the total time spent on segment operations:
`decompress-segment-query-total`: total time spent on segment decompression for queries
`load-segment-query-total`: total time spent on segment loading for queries
Added additional node-level metrics to the `humio-metrics` option `time-livequery`, which measures the amount of CPU time used in static searches as a fraction of wall clock time:
`time-query-decompress-segment`
`time-query-read-segment`
`time-query-map-segment`
Added a new gauge metric `build_info` with a label named `version` containing the full build version. The value is a constant 1.
Functions
The `findTimestamp()` function now includes a new `timezoneField` parameter, which provides dynamic timezone handling. This allows you to:
Specify a field containing the default timezone for timestamps that lack timezone information
Use the same parser across multiple datasources with different default timezones.
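Based on the description above, a parser might use the new parameter as follows; the field name `tz` and the exact parameter usage are illustrative assumptions:

```logscale
// 'tz' is assumed to hold a timezone name (e.g. "Europe/Copenhagen")
// supplied per log source; it is used only when the parsed timestamp
// itself carries no timezone information.
findTimestamp(timezoneField=tz)
```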
Introduced a new function `text:substring()` that can extract a substring of a string based on the supplied indices.
Introduced a new function `text:positionOf()`, which finds the position of a given character or substring within a string. Useful in conjunction with `text:substring()`.
Added a new function `text:length()`, which calculates the length of a string. Useful in conjunction with `text:substring()`.
Added a `timezoneField` parameter to `parseTimestamp()`. This allows you to provide a dynamic default timezone for when the event's timestamps do not contain a timezone, by specifying a field that contains the default timezone. The same parser can then be used in contexts that do not share the same static default timezone, for instance when parsing events from different log sources.
Additionally, a deprecation warning has been added for the use of the `timezone` parameter, as its behavior will change in the future to act as a default timezone instead of an override value. That is, it will no longer overwrite what is parsed from the event's timestamp.
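A hypothetical combination of the three `text:` functions, splitting a field on its first colon; all parameter names below are illustrative assumptions and should be verified against the function reference:

```logscale
// Parameter names are assumptions, not confirmed signatures.
text:length(message, as=len)                       // length of the string
| text:positionOf(message, substring=":", as=sep)  // index of first ":"
| text:substring(message, begin=0, end=sep, as=prefix)
```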
Fixed in this release
User Interface
Tables in the `Search` page have been fixed for the following issues:
Copying rows from multiple pages at different stages of a live query's completion resulted in data inconsistency.
An infinite loading state occurred in static queries when trying to access pages that hadn't been fetched yet.
For static queries:
Disabled row selection (checkboxes disabled)
Added tooltip to inform that row selection is not available until query completion.
For live queries:
Row selection now limited to current page
Table updates automatically pause during row selection
Row deselection required to navigate between pages and re-enable table updates.
These fixes and improvements prevent misleading comparisons between data captured at different processing stages, which is especially important when copying or analyzing results across multiple pages.
Fixed an issue where the progress calculation of queries using `defineTable()` would incorrectly fluctuate, causing the progress bar in the search UI to move back and forth. Queries are now weighted evenly to ensure consistent progress tracking even if the work of a query is yet to be calculated.
The Parameters top panel could be open by default even though it did not contain any parameters. This wrong behavior has now been fixed.
Automation and Triggers
Fixed two issues with scheduled searches:
A failure to update a scheduled search could cause it to get stuck and not run until cluster restart.
A deleted scheduled search could cause the scheduled search job to continuously log that it was waiting for the scheduled search to finish.
For more information, see Scheduled searches.
Storage
Fixed an issue where the logs indicating which query took the longest to process a segment would appear long after query completion. Logging will now be delayed by no more than 10 seconds.
For more information, see LogScale Internal Logging.
Fixed an issue where a race condition between start-up and digest assignment would prevent new nodes from receiving digest partitions. This change also makes partition release more efficient during node shutdown, potentially improving ingest latency during digest reassignment.
Fixed an issue where merging segments could use excessive memory when processing events with large numbers of distinct fields. LogScale will now limit memory usage by stopping field extraction optimization when too many distinct field names are encountered.
For more information, see Creating Segment files.
Secondary storage was unable to copy files larger than 2 GB due to file corruption in transit, causing such files to remain only on the primary storage device. This issue has now been fixed.
GraphQL API
Fixed an issue where the GraphQL mutation createPersonalUserTokenV2 would fail with an unspecified error message.
For more information, see createPersonalUserTokenV2() .
Dashboards and Widgets
Fixed an issue that allowed users to save a dashboard using the FixedList Parameter without a defined value, causing dashboard exports to fail.
Fixed an issue where invalid values and label lists for the FixedList Parameter type would not trigger the Save Dashboard with Invalid Changes warning.
Extended the `Single Value` widget to be compatible with query results containing one or two `groupBy()` fields (two for the multiple-grid visualization), where the `groupBy()` fields make up the entire set. For more information, see Displaying Values in a Grid.
Queries
Fixed a race condition between live query submission and digest start: if a live query coordinator submitted work to a worker cluster as a new digest session was starting, the static part assigned to that worker cluster would be omitted.
For more information, see Digest Rules.
Fixed an issue where certain regex patterns that could not be compiled by the JitRex engine would lead to very slow query submission and excessive resource usage.
For more information, see Regular Expression Syntax.
Fixed the computation of digest flow information returned as part of query metadata. This information indicates which ingest timestamps are reliably included in the search result.
The changes primarily affect historic queries where the digest information is now fixed at query submission time, whereas previously it kept being updated on each poll. This was incorrect because the set of events for the query is fixed on submission time.
For consumers, the main effect is that the returned values are now generally going to be further in the past than previously.
For live queries, the fixes relate to races between the computation of results and the computation of digest flow info. To address this, digest flow info is now slightly more conservative than before.
When searching by ingest timestamp with the interval (`start`, `end`), events with an ingest timestamp equal to `end` would sometimes be incorrectly included. This wrong behavior has now been fixed.
Fixed an issue where events would incorrectly remain unredacted when query strings used for redaction contained derived tags, such as `#repo`.
Fleet Management
The organization permission `ViewFleetManagement` in Fleet management was not sufficient to see the relevant pages. This issue has now been fixed.
Metrics and Monitoring
Fixed an issue with time unit conversions for meter values in internal metrics reporting (introduced in v1.196), where due to incorrect unit conversion, values were off by a factor of 10^9. Only internal metrics exports were affected - logged metrics and Prometheus metrics were unaffected. Histogram metric labels were also corrected to show as HISTOGRAM instead of TIMER.
The node-level metric `load-segment-total` has been fixed, as the computation did not include the time spent loading segments for queries and segment merging. For more information, see the `load-segment-total` metric.
Functions
Fixed rare cases where queries using `correlate()` would appear to stall after the first iteration.
Other
Fixed an issue where concurrency issues when parsing YAML content occasionally caused threads to loop infinitely.
Known Issues
Storage
For clusters using secondary storage where the primary storage on some nodes in the cluster may be getting filled (i.e., the storage usage on the primary disk is halfway between `PRIMARY_STORAGE_PERCENTAGE` and `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`), those nodes may fail to transfer segments from other nodes. The failure is indicated by the error `java.nio.file.AtomicMoveNotSupportedException` with the message "Invalid cross-device link".
This does not corrupt data or cause data loss, but it will prevent the cluster from being fully healthy and could also prevent data from reaching adequate replication.
Improvement
User Interface
Improved query parameter detection accuracy for queries using parameters with special characters:
Now correctly identifies parameters with `.` characters in the name (for example, `?http.method`).
Properly detects quoted parameter names in default value syntax (for example, `?{"http.method"="GET"}`).
Improved the responsiveness of the Save searches panel by introducing breakpoints on initial width calculation and allowing more space for the query editor, especially on smaller screen sizes.
The Save searches panel now includes new functionality:
Panel now closes when clicking the or .
Query field resizes when panel is closed
Saved searches can be grouped by either Package, Labels, or Last modified.
Storage
Improved query planning performance for scenarios with many bucket-stored segments by implementing cached precomputed tables instead of expensive rendezvous hashing. The hash table size defaults to 10,000 rows and can be configured using `NUMBER_OF_ROWS_IN_SEGMENT_TO_HOST_MAPPING_TABLE`.
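For example, doubling the default table size (the value below is illustrative; a larger table trades memory for fewer collisions):

```shell
# Default is 10000 rows; raise for clusters with very many bucket-stored
# segments at the cost of additional memory.
NUMBER_OF_ROWS_IN_SEGMENT_TO_HOST_MAPPING_TABLE=20000
```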
GraphQL API
The `assignedTasks` and `unassignedTasks` fields of the `ClusterNode` GraphQL datatype now only show tasks relevant to the node's `nodeRole`, providing clearer information about active tasks.
Queries
Revised coordination of aggregate streaming queries to run on query coordinators instead of request-receiving nodes, preventing resource starvation and slow performance occurring when receiving nodes are improperly sized for query coordination.
Improved performance by compiling queries once instead of twice when starting alert jobs.
Improved query response handling by streaming large JSON responses. This enhancement allows responses to start streaming faster even when the entire response is very large, particularly helping in cases where responses previously hit request timeouts.
Improved the stability of multi-cluster search by implementing retry logic for failed polls on certain types of exceptions.
Multi-cluster search worker clusters no longer execute the result calculation pipeline for multi-cluster queries. This eliminates external-function calls and reverse DNS calls on remote clusters in multi-cluster search queries, reducing resource consumption.
For more information, see Searches in a Multi-Cluster Setup.
Queries will now preferentially read segments from non-evicted hosts, avoiding reading data from hosts that are being decommissioned.
For more information, see Ingestion: Digest Phase.
Query workers under digest load now respond more quickly when canceling running queries.
Fleet Management
The Fleet management poll endpoint has been optimized to avoid parsing configuration files at poll time.
Metrics and Monitoring
Added new metrics to help monitor and diagnose segment-fetching queue issues:
`segment-fetching-trigger-queue-hit-full-after-global-scan-counter`
`segment-fetching-trigger-queue-offer-from-global-scan-counter`
`segment-fetch-requested-but-already-in-progress`
`segment-fetch-requested-but-upstream-has-been-deleted`
`segment-changes-job-trigger-full-global-scan-counter`
Updated the default histogram implementation from `SlidingTimeWindowArrayReservoir` to `LockFreeExponentiallyDecayingReservoir` for improved memory utilization in the case of a high cardinality of metrics or a high sample rate. The new implementation uses reservoir sampling with exponential decay, providing better performance under high concurrency while maintaining statistical accuracy.
Note: Some metric values may shift from their former baselines due to statistical sampling.
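The shift in baselines comes from the sampling scheme itself. A minimal Python sketch of an exponentially decaying reservoir (a simplified model of the Dropwizard-style design, not LogScale's actual implementation) shows why recent samples dominate the kept set:

```python
import math
import random

class ExpDecayReservoir:
    """Simplified sketch of an exponentially decaying reservoir;
    illustrative only, not LogScale's implementation."""

    def __init__(self, size=128, alpha=0.015):
        self.size = size
        self.alpha = alpha
        self.values = {}  # priority -> sample value

    def update(self, value, timestamp):
        # Newer samples receive exponentially larger priorities, so older
        # samples are the first evicted once the reservoir is full.
        priority = math.exp(self.alpha * timestamp) / random.random()
        if len(self.values) < self.size:
            self.values[priority] = value
        else:
            lowest = min(self.values)
            if priority > lowest:
                del self.values[lowest]
                self.values[priority] = value

    def snapshot(self):
        # Statistics (quantiles, etc.) are computed from this biased sample,
        # which is why reported values can shift from fixed-window baselines.
        return sorted(self.values.values())
```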