Falcon LogScale 1.191.0 GA (2025-06-03)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes
---|---|---|---|---|---|---|---|---
1.191.0 | GA | 2025-06-03 | Cloud | Next LTS | No | 1.150.0 | 1.177.0 | No
Available for download two days after release.
Download
Use `docker pull humio/humio-core:1.191.0` to download this release.
Advance Warning
The following items are due to change in a future release.
Functions
Starting from release 1.195, the query functions `asn()` and `ipLocation()` will display an error instead of a warning if an error occurs with their external dependency. This change aligns their behavior with functions that use similar external resources, such as `match()`, `iocLookup()`, and `cidr()`.
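For illustration, a minimal query sketch using both functions; the input field name and the enriched output fields (`.asn`, `.org`, `.country`) are assumptions patterned on the functions' usual enrichment behavior, not taken from this release note:

```
// Enrich events with ASN and geolocation data for the source IP.
// From 1.195, a failure in the underlying lookup database will be
// reported as an error rather than a warning.
asn(field=src_ip)
| ipLocation(field=src_ip)
| table([src_ip, src_ip.asn, src_ip.org, src_ip.country])
```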
Deprecation
Items that have been deprecated and may be removed in a future release.
- The `color` field on the `Role` type has been marked as deprecated and will be removed in version 1.195.
- The `setConsideredAliveUntil` and `setConsideredAliveFor` GraphQL mutations are deprecated and will be removed in 1.195.
- The `lastScheduledSearch` field on the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new `lastExecuted` and `lastTriggered` fields have been added to the `ScheduledSearch` datatype to replace `lastScheduledSearch`.
- The `EXTRA_KAFKA_CONFIGS_FILE` configuration variable has been deprecated and is planned for removal no earlier than version 1.225.0. For more information, see RN Issue.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Upgraded the Kafka clients to 3.9.1.
New features and improvements
Automation and Alerts
New options are available in the UI for scheduled searches:

- Added an hourly frequency for running scheduled searches. Previously, only daily, weekly, and monthly schedules were available in the schedule configuration.
- Scheduled searches now use the hourly configuration by default instead of a cron expression.

For more information, see Schedule.
Ingestion
Custom ingest tokens are now generally available through the API (not in the UI). A minimum length restriction of 16 characters has been added for custom ingest tokens.
For more information, see Custom Tokens.
Functions
Introduced the new `reverseDns()` query function for performing reverse DNS lookups, intended to replace the old `rdns()` function. Administrators can control the function using the following configuration; a usage sketch follows the lists below.

Dynamic configurations:

- `ReverseDnsDefaultTimeoutInMs` – Default timeout for resolving IPs
- `ReverseDnsDefaultLimit` – Default number of unique IPs resolved
- `ReverseDnsMaxLimit` – Maximum allowed number of unique IPs resolved
- `ReverseDnsConcurrentRequests` – Maximum number of concurrent requests
- `ReverseDnsRequestsPerSecond` – Maximum number of requests per second

Configuration variables:

- `IP_FILTER_RDNS_SERVER` – IP filter for the allowed DNS servers
- `IP_FILTER_RDNS` – IP filter for the allowed IPs that can be resolved
- `RDNS_DEFAULT_SERVER` – The default DNS server to be used
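As a usage illustration only, a hedged sketch of the new function in a query; the `limit` parameter and the `hostname` output field are assumptions patterned on similar enrichment functions, not confirmed by this release note:

```
// Resolve reverse DNS names for values of the ip field.
// The number of unique IPs resolved is capped by the limit parameter
// and ultimately by the ReverseDnsMaxLimit dynamic configuration.
reverseDns(ip, limit=100)
| table([ip, hostname])
```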
Fixed in this release
Administration and Management
Fixed incorrect registration of the `segment-fetching-trigger-queue-size` metric, which was producing misleading values.
Automation and Alerts
Fixed a rare issue where information about the execution of Filter and Aggregate alerts could fail to be saved, potentially resulting in duplicate alerts.
Fleet Management
Fixed the Fleet overview page, where collectors with errors in log sources would incorrectly show the Okay status instead of ERROR.
Improvement
Installation and Deployment
Updated PDF Render Service dependencies to eliminate vulnerabilities.
User Interface
The legend title can now be enabled and added to the Time Chart widget.
Storage
Reduced the log level of `OutOfOrderSequenceException` in the ingest pipeline from ERROR to WARN. These exceptions occur either due to data loss in Kafka (requiring investigation by a Kafka administrator) or, more likely, due to a timeout on message delivery, in which case the exception follows the timed-out message.

The log level for writes to the Global Database remains at ERROR, as that failure will cause the node to crash.