Falcon LogScale 1.195.1 LTS (2025-07-22)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes
---|---|---|---|---|---|---|---|---
1.195.1 | LTS | 2025-07-22 | Cloud, On-Prem | 2026-07-31 | Yes | 1.150.0 | 1.177.0 | No
Download
Use `docker pull humio/humio-core:1.195.1` to download this release.
These notes include entries from the following previous releases: 1.195.0, 1.194.0, 1.193.0, 1.192.0, 1.191.0, 1.190.0
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Configuration
Removed server compatibility checks from multi-cluster searches. These checks became obsolete due to internal implementation changes made in past versions. The new behavior is described at Multi-Cluster Compatibility Across Versions.
Additional related changes:

- Removed the `UNSAFE_RELAX_FEDERATED_PROTOCOL_VERSION_CHECK` environment variable.
- Deprecated the `remoteServerCompatVersion` field in the `RemoteClusterConnectionStatus` type (returned by the checkRemoteClusterConnection() GraphQL query). The `remoteServerCompatVersion` field will be removed no earlier than version 1.207, following the ShortTerm API stability deprecation policy.

The `QueryBacktrackingLimit` feature flag has been removed. Use the `QueryBacktrackingLimit` dynamic configuration to adjust the limit.

Functions
As previously announced in RN Issue, certain functions, such as `eventSize()`, can no longer be used after the first aggregate function. For example, this query is no longer valid:

Invalid Example for Demonstration - DO NOT USE

```logscale
groupBy(class) | eventSize()
```

These functions can still be used before the first aggregate function:

```logscale
eventSize() | tail(200)
```

This change is necessary because these functions require access to the original events, which are not available after aggregation.
Free-text search is no longer supported after the first aggregate function (as previously announced in RN Issue). For example, this query is no longer supported:

```logscale
tail(200) | "Lorem ipsum dolor"
```

You can still search for strings in specific fields after aggregation:

```logscale
tail(200) | msg="Lorem ipsum dolor"
```

Free-text search before the first aggregate function remains supported:

```logscale
"Lorem ipsum dolor" | tail(200)
```
Deprecation
Items that have been deprecated and may be removed in a future release.
The `datasource-count` metric has been deprecated and will be removed in version 1.201 of LogScale. The total number of datasources is available via the logs written by the `GlobalSegmentStatsLoggerJob`, in the datasources field. When a new datasource is created or marked as deleted, the total number of datasources is logged in the datasourceCount field.
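For example, the total could be read from those logs with a query along these lines, assuming the standard class field on LogScale's internal logs (the exact class value is an assumption):

```logscale
// Search the internal humio repository for the stats logger output
// and read the datasource total from its datasources field.
class = /GlobalSegmentStatsLoggerJob/
| tail(1)
| select([datasources])
```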
The `lastScheduledSearch` field from the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new `lastExecuted` and `lastTriggered` fields have been added to the `ScheduledSearch` datatype to replace `lastScheduledSearch`.

The `EXTRA_KAFKA_CONFIGS_FILE` configuration variable has been deprecated and is planned for removal no earlier than version 1.225.0. For more information, see RN Issue.
`rdns()` has been deprecated and will be removed in version 1.249. Use `reverseDns()` as an alternative.

The updateScheduledSearchV2 GraphQL mutation has been deprecated in favor of updateScheduledSearchV3, which now includes the triggerOnEmptyResult field.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Storage
Changed segment upload behavior to use the first alive host among the `ownerHosts`, instead of just the first `ownerHost`.

Reverted a change from version 1.191.0 that increased the buffer size used for parsing global snapshots, as the change did not yield the expected performance improvements.
Configuration
Modified the behavior of `S3_STORAGE_PREFERRED_COPY_SOURCE` and related bucket provider variables. When enabled, these settings now completely disable node-to-node transfers within the cluster; all fetching between nodes will occur via bucket storage. This change better aligns with customer requirements for minimizing costs from node-to-node transfers in environments where such transfers are more expensive than bucket downloads.

The previous behavior can be maintained by setting `S3_BUCKET_STORAGE_PREFERRED_MEANS_FORCED=false`. Please inform Support should you need to use this option. This option will be removed in version 1.201.0 unless specific use cases require its retention.

The previously undocumented `S3_STORAGE_FORCED_COPY_SOURCE` is now deprecated and will be removed in version 1.201.0. Users should use `S3_STORAGE_PREFERRED_COPY_SOURCE` instead.

Ingestion
Parse Data now only reports missing lookup files when the query statement using the file is actually evaluated. For example, when using case branching where the branch with a missing lookup file is not hit by the event, no warning is generated for the missing file.
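For example, in a case construct like the following sketch (the file name, type field, and id field are hypothetical), events that never hit the first branch no longer produce a warning about the missing file:

```logscale
case {
  // This branch references a lookup file that may be missing; a
  // warning is only reported for events that actually hit the branch.
  type = "enriched" | match(file="missing-lookup.csv", field=id);
  // Catch-all branch for all other events.
  *
}
```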
Queries
Changed the HTTP status code from 400 to 503 when a query fails to start due to internal errors, such as the query queue being full.
Functions
The `asn()` and `ipLocation()` functions now throw errors (instead of warnings) in query contexts where there are issues with external dependencies. This matches the error handling behavior of other functions that use external dependencies, such as `match()` and `ioc:lookup()`.
When running on ingest time, `select()` now retains @ingesttimestamp internally, even when this field is not selected in the function. This way, functions that require @ingesttimestamp continue to work even if the field is not selected. For example, this query works correctly even without selecting @ingesttimestamp:

```logscale
select([foo, bar]) | tail(100)
```

Unless explicitly selected, @ingesttimestamp is not part of the query result. For instance:

```logscale
select([foo, bar, contextTimestamp]) | tail(200) | parseTimestamp(contextTimestamp, as=@ingesttimestamp)
```

This query outputs the foo and bar fields only, but not @ingesttimestamp, because it is not explicitly included in `select()`. To include @ingesttimestamp in the results, you can either:

- Add @ingesttimestamp to `select()` explicitly (sketched below), or
- Give the parsed timestamp a different name.
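A minimal sketch of the first option (the foo and bar field names are placeholders, as in the examples above):

```logscale
// @ingesttimestamp is selected explicitly, so it is kept in the result.
select([foo, bar, @ingesttimestamp]) | tail(200)
```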
This change makes the timestamp behaviour when using `select()` consistent between queries running on @timestamp and @ingesttimestamp.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Upgraded the Kafka clients to 3.9.1.
New features and improvements
Administration and Management
Enabled the AWS Netty client as the default HTTP client for S3 bucket operations, replacing the existing PekkoHttpClient. The AWS Netty client (based on the Netty project) is the default HTTP client for asynchronous operations in AWS SDK v2. It is possible to fall back to PekkoHttpClient by setting the `S3_NETTY_CLIENT` configuration variable to `false` and restarting the cluster.

This implementation provides additional metrics which can be used to monitor the client connection pool:

- `s3-aws-bucket-available-concurrency`
- `s3-aws-bucket-leased-concurrency`
- `s3-aws-bucket-max-concurrency`
- `s3-aws-bucket-pending-concurrency-acquires`
- `s3-aws-bucket-concurrency-acquire-duration`

More information about each metric is available in the HTTP Metrics section of the AWS documentation page.

On clusters where non-humio thread dumps are available, it is also possible to inspect the state of the client thread pool by searching for the thread name prefix `bucketstorage-netty`.

By default, the client uses sensible default values from the AWS SDK Netty client, but it can be tuned further by setting the following environment variables:

More information about each setting is available in the AWS SDK for Java API Reference.
Automation and Triggers
New options are available in the UI for Scheduled searches:
- Added an hourly frequency for running scheduled searches. Previously, only daily, weekly, and monthly schedules were available when selecting the schedule configuration.
- Scheduled searches now use the hourly configuration by default instead of a cron expression.

For more information, see Schedule.
Scheduled searches can now trigger actions even when no results are found. Previously, actions would only trigger when results were found. This is an optional feature that you can set in Advanced settings.
It is now possible to test Actions with an empty set of events. This feature allows for validating that actions work correctly when no events are found by a scheduled search, and helps prevent action configuration errors.
GraphQL API
Labels can now be added to files through the newFile() and updateFile() GraphQL mutations, and can be queried on the File datatype.

Added the ability to create a saved query from a YAML template via the new createSavedQueryFromTemplate GraphQL mutation.
Added new GraphQL mutation copySavedQuery(). This mutation allows copying a saved query, optionally into another repository.
Configuration
The new configuration option `QUERY_SCHEDULER_QUERY_QUEUE_SIZE` determines the number of queries that can be enqueued on the query workers while waiting to start running.

Introduced a configurable limit on the number of connections that can be attached to a Multi-Cluster View. The default limit is 50, but it can be changed through the `MAX_FEDERATED_CONNECTIONS` environment variable.

Introduced a configurable limit on the number of tags that can be added to a Multi-Cluster View connection. The default limit is 25, but it can be changed through the `MAX_FEDERATED_CONNECTION_TAGS` environment variable.
Ingestion
Added ingest feeds for consuming data from Azure Event Hubs. This feature is now available on cloud, and was released for self-hosted deployments as of 1.189.0.
For more information, see Ingest Data from Azure Event Hubs.
Custom ingest tokens are now generally available through the API (not in the UI). A minimum length restriction of 16 characters has been added for custom ingest tokens.
For more information, see Custom Tokens.
Dashboards and Widgets
To support the output of the `correlate()` function introduced in this version, the Table widget has a new format setting, Group fields by prefix, to display fields from the same event in a single column.

Fields that are used for constraints in a query using `correlate()` now show as highlighted in the Table widget when the Group fields by prefix option is enabled. Hovering over a constraint field further highlights all connected fields.
Functions
The new `correlate()` function for advanced event pattern detection is now available. This feature enables users to identify specific sequences of events. Key capabilities:

- Search for related event groups and patterns
- Define temporal relationships
- Configure custom detection criteria

Example use case: search for a sequence where a user has three failed login attempts followed by a successful login within a five-minute window.

For detailed implementation guidelines and configuration options, see the `correlate()` function documentation.
Introduced the new `reverseDns()` query function for performing reverse DNS lookups, intended to replace the old `rdns()` function. Administrators can control the function using the following configuration.

Dynamic configurations:

- `ReverseDnsDefaultTimeoutInMs` – Default timeout for resolving IPs
- `ReverseDnsDefaultLimit` – Default number of unique IPs resolved
- `ReverseDnsMaxLimit` – Maximum allowed number of unique IPs resolved
- `ReverseDnsConcurrentRequests` – Maximum number of concurrent requests
- `ReverseDnsRequestsPerSecond` – Maximum number of requests per second

Configuration variables:

- `IP_FILTER_RDNS_SERVER` – IP filter for the allowed DNS servers
- `IP_FILTER_RDNS` – IP filter for the allowed IPs that can be resolved
- `RDNS_DEFAULT_SERVER` – The default DNS server to be used
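A minimal usage sketch, assuming `reverseDns()` takes the field holding the IP address as its first argument and writes the resolved name to a hostname field by default, as `rdns()` does (the ip field name is a placeholder):

```logscale
// Resolve IPs in the ip field and group events by the resolved name.
reverseDns(ip)
| groupBy(hostname)
```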
Fixed in this release
Administration and Management
Fixed an issue in the live-dashboard-query-count metric to improve accuracy.
Fixed incorrect registration of the segment-fetching-trigger-queue-size metric that was producing misleading values.
User Interface
Filtering on the result of an aggregation could lead to more rows in the UI than there should be. This issue has now been fixed.
Fixed an issue where some table columns would not get sorted properly.
Links to the package template schemas documentation in the LogScale UI have been fixed to point to the correct pages instead of the library homepage.
Automation and Triggers
Fixed a rare issue where information about the execution of Filter and Aggregate alerts could fail to be saved, potentially resulting in duplicate alerts.
The Time Selector now correctly retains the timestamp selected in Advanced settings when editing a trigger in the Search page. Previously, it would always default to @ingesttimestamp.
Storage
Added disk space verification before downloading IOC files to prevent downloads when disk is full.
Added disk space verification before segment merging to prevent merges when disk is full.
Configuration
Fixed the feature flag implementation to prevent flags from temporarily entering wrong states during boot.
Dashboards and Widgets
Widgets now display the Raw value format with better precision, as they no longer round or truncate significant digits: raw values now keep the same precision that JavaScript floats can handle. For example, before the fix a chart would display a raw value such as 12345678 as 12,345,700; after the fix, the chart correctly displays the value as 12,345,678.

Fixed an issue where clicking a preset interaction, such as a link in the Table widget that adds a field filter to the end of a query, would convert a safe value into an incorrect regex.

Fixed a display issue in widgets such as Single Value where Small multiples visualizations appeared empty.
Log Collector
Extracted fields, including fields from the Log Collector, could become removable if other fields could also be removed.
This issue resulted in inaccurate usage calculations, as extracted fields' sizes were subtracted from ingestion totals.
Queries
Fixed an issue where queries with specific tag and field configurations could erroneously filter out events. The filtering issue occurred when a query met all of these conditions:

- The query used tag-grouping
- The query used field aliasing
- The field aliasing rules included a tag-grouped tag
- The query filtered results based on a field-aliased field

Example:

- A field aliasing rule maps vendor123.bar to baz when #foo=123
- The tag #foo uses tag-grouping
- The query filters results based on the baz field
LogScale could not identify joins inside saved queries when `defineTable()` was also used. Because the `join()` and `defineTable()` functions cannot be used together in the same query, this fix ensures that joins are no longer hidden within saved queries.

Fixed rare cases where a stale query cache might have been reused for static queries with time-dependent functions.
Fixed an issue where during digest restart a query might receive duplicate events.
Fixed an issue that caused incorrect worker assignments to a query after handover operations. These incorrect assignments would lead to unnecessary query restarts.
During digest restart, live queries could miss some events in cases where the live query had dependencies, such as dependencies on a lookup file. This issue has now been fixed.
Fleet Management
Fixed a visibility issue where enrolled Log Collector instances that hadn't ingested metrics for over 30 days were not appearing in the fleet overview.
The Fleet overview page has been fixed: collectors with errors in log sources would incorrectly show the Okay status instead of ERROR.
Functions
Fixed an issue where the _count field from `fieldstats()` could overflow to a negative value when the function was processing large event volumes.
Other
LogScale shutdown could be delayed if errors occurred during a shutdown already in progress.
Improvement
Installation and Deployment
Updated PDF Render Service dependencies to eliminate vulnerabilities.
User Interface
The legend title can now be enabled and added to the Time Chart widget.
Automation and Triggers
For filter and aggregate alerts, values for field-based throttling are now being hashed to save space.

For Self-hosted only: this change enables storing more values for field-based throttling when using throttle fields with large values. See the `FILTER_ALERT_MAX_THROTTLE_FIELD_VALUES_STORED` and `AGGREGATE_ALERT_MAX_THROTTLE_FIELD_VALUES_STORED` configuration variables.

For Self-hosted only: if you need to downgrade after upgrading to this version, you might lose all values stored for field-based throttling, causing alerts with field-based throttling to trigger again although they should have been throttled. This will occur at most once per throttling field value.
Storage
Made improvements to all bucket upload operations. Bucket storage upload operations (uploaded files/global snapshots/segments) now work more efficiently by utilizing the upload queue and callback functions to complete the upload. This ensures that configured concurrency limits are properly enforced.
Reduced memory usage when handling numerical values in internal JSON representation.
Reduced the log level of `OutOfOrderSequenceExceptions` in the ingest pipeline from ERROR to WARN. These exceptions occur either due to data loss in Kafka (requiring Kafka administrator investigation) or, more likely, due to a timeout on message delivery, in which case the exception follows the timed-out message. The log level for writes to the Global Database remains at ERROR, as that failure will cause the node to crash.
Reduced memory usage when working with large tables (for example, those defined by `defineTable()`).
GraphQL API
Added support for labels in the GraphQL API for Actions. Labels can now be:

- Added to Actions through the GraphQL mutations for creating and updating Actions
- Queried on the Action type
Made the name input argument of the createDashboardFromTemplateV2() mutation optional. If not supplied, the name will default to the name in the template.
Extended the analyzeQuery() endpoint with an optional time interval. This allows validating the interval for syntax errors.
Queries
Enhanced query handling to prevent execution of queries originating from timed-out HTTP requests.
Increased delays between repeated query restarts of the same static query.
Improved consistency in log message format between slow query and query ended logs.
Functions
Improved performance of `match(mode=glob)`. It now runs significantly faster in many situations; the impact depends on the situation, but speed-ups of 4x-90x have been observed.

`groupBy()` has been improved with optimized results. In some special cases, the function has shown memory allocation reduced by up to 90% and CPU time reduced by over 60%.

The `correlate()` function now generates a warning message when used in an unsupported, non-top-level context, such as in subqueries or when passed as an argument to a function.

Improved performance of the `sort()`, `tail()`, `head()`, and `table()` query functions in live queries.

Searches using ID filters, such as with `in(@id, values=[...])`, are now optimized to run more efficiently. This improvement is especially noticeable when drilling down into results using the `correlate()` function.
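For example, a drill-down query of the following shape now runs more efficiently (the event IDs are placeholders):

```logscale
// Fetch a specific set of events by their @id values.
in(@id, values=["event-id-1", "event-id-2"])
```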