Falcon LogScale 1.210.0 GA (2025-10-14)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes
---|---|---|---|---|---|---|---|---
1.210.0 | GA | 2025-10-14 | Cloud | Next LTS | No | 1.150.0 | 1.177.0 | No
Available for download two days after release.
Download
Use docker pull humio/humio-core:1.210.0 to download this release.
Bug fixes and updates
Advance Warning
The following items are due to change in a future release.
Configuration
Cached data files mode, which allows users to configure a local cache directory for segment files, has been deprecated and will be removed in version 1.225.0. This configuration is no longer recommended, as using a local drive with bucket storage generally provides better performance.
The associated configuration variables have also been deprecated and are planned for removal in version 1.225.0 (see the sketch after the list):
CACHE_STORAGE_DIRECTORY
CACHE_STORAGE_PERCENTAGE
CACHE_STORAGE_SOURCE
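For reference while moving away from this mode, a minimal sketch of the deprecated configuration is shown below; the directory path and percentage are placeholder values, and the meaning of CACHE_STORAGE_SOURCE should be checked against the configuration reference rather than inferred from this example.
```
# Deprecated cached data files mode (removal planned for 1.225.0); placeholder values only.
CACHE_STORAGE_DIRECTORY=/data/logscale-cache   # local directory used to cache segment files
CACHE_STORAGE_PERCENTAGE=90                    # assumed: share of the volume the cache may use
# CACHE_STORAGE_SOURCE=...                     # consult the configuration reference for valid values
```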
Deprecation
Items that have been deprecated and may be removed in a future release.
The EXTRA_KAFKA_CONFIGS_FILE configuration variable has been deprecated and is planned for removal no earlier than version 1.225.0. For more information, see RN Issue.
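For context, EXTRA_KAFKA_CONFIGS_FILE points to a properties file whose entries are passed on to LogScale's Kafka clients; the sketch below uses standard Kafka client property names with placeholder values and is an illustration only, not a recommended configuration.
```
# Hypothetical file referenced by EXTRA_KAFKA_CONFIGS_FILE (placeholder values).
security.protocol=SSL
ssl.truststore.location=/etc/kafka/truststore.jks
ssl.truststore.password=changeit
```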
rdns() has been deprecated and will be removed in version 1.249. Use reverseDns() as an alternative function.
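As a minimal migration sketch, assuming reverseDns() accepts the same field and as parameters as rdns() (the parameter names and the ipAddress/hostname fields are assumptions; consult the reverseDns() reference):
```
// Before (deprecated; removed in 1.249):
// rdns(field=ipAddress, as=hostname)
// After (sketch; parameters assumed to carry over from rdns()):
reverseDns(field=ipAddress, as=hostname)
```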
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Upgraded LogScale's Zstandard (ZSTD) compression library from version 1.5.6 to 1.5.7.
New features and improvements
User Interface
Added new styling option to adjust the size of axis and legend titles on time chart, pie chart, bar chart, scatter chart, and heat map widgets.
Functions
Added query function matchAsArray(), which matches multiple rows from a CSV or JSON file and adds them as object array fields. This is similar to the match() function but with the following key differences:
Only supports ExactMatch mode
Adds multiple matches as structured arrays instead of creating separate events
Allows customization of the array name using the asArray parameter
The length of the structured arrays is limited to nrows. If the number of matches is larger than nrows, the last nrows matches are put in the structured array; this mirrors how the match() function handles more matches than nrows.
For more information, see match(), nrows parameter.
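A minimal usage sketch follows, assuming matchAsArray() accepts the same file, field, and column parameters as match(); only asArray and nrows are named above, and the file name, field names, and array-name value are illustrative assumptions.
```
// Hedged sketch: file, field, and column are assumed to mirror match();
// the asArray value format is also an assumption.
matchAsArray(file="users.csv", field=userId, column="id", asArray="users[]", nrows=10)
```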
Fixed in this release
Storage
Fixed an issue causing unbounded creation of global snapshots in temporary directories during periods of poor bucket storage performance.
Queries
Fixed an issue where anchored time points would cause import/export of dashboards and saved queries to fail. New schema versions for dashboards and saved queries (0.23.0 and 0.60 respectively) will now allow advanced time interval syntax.
For more information, see Anchored Time Points - Syntax.
Functions
Fixed an issue where the parseXml() function would output arrays incompatible with array functions due to the lack of a [0] element. Backward compatibility with existing queries is maintained by keeping the first element in the non-array field.
For more information, see parseXml().
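A hedged illustration of the fixed behavior, assuming an input field named payload containing repeated <item> elements (the field and element names, and the resulting array name, are assumptions):
```
// Hedged sketch: payload and item[] are placeholder names.
parseXml(field=payload)
// Repeated elements now also get a [0] entry (item[0], item[1], ...),
// so array functions such as array:length() can consume them:
| array:length("item[]", as=itemCount)
```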
Improvement
Ingestion
Added error logging for ingest queue progression issues. When the read offset metric for any ingest queue partition doesn't progress, an error message beginning with Ingest queue progress error: is logged, followed by the relevant log data; a hedged search sketch appears after the note below.
The criteria for logging an error message are:
Ingest queue doesn't progress over a 10-minute period
Ingest queue shows no activity for over an hour
Note
LogScale clusters regularly send internal messages on every ingest partition. If the metric does not increase, there is an issue with the digester.
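As a hedged illustration, a free-text search such as the following could surface these new error lines in the repository that holds the cluster's own logs; the exact message format beyond the stated prefix, and where those logs are stored, are assumptions.
```
// Hedged sketch: count occurrences of the new error prefix over time.
"Ingest queue progress error:"
| timeChart(span=10m)
```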
Queries
Digest nodes now measure wall-clock time instead of CPU time when updating live queries with events, improving performance and reducing CPU usage.
Note
This improvement may introduce slight variations in live cost measurements due to thread scheduling.
Metrics and Monitoring
Added the following new metrics for live query execution monitoring; a charting sketch follows the list:
total-live-events: Provides an aggregate count of live events across all dataspaces
worker-live-queries: Provides the number of live queries currently running on the worker node
worker-live-dataspace-queries: Provides the total number of repository queries currently executing on the worker node
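As a sketch of charting one of the new metrics, the query below assumes the metric is available as events with name and value fields in an internal metrics repository; those field names are assumptions rather than documented behavior.
```
// Hedged sketch: "name" and "value" are assumed field names for metric events.
name = "worker-live-queries"
| timeChart(function=avg(value))
```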