Falcon LogScale 1.171.2 LTS (2025-03-19)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes
---|---|---|---|---|---|---|---|---
1.171.2 | LTS | 2025-03-19 | Cloud, On-Prem | 2026-02-28 | Yes | 1.150.0 | 1.165.1 | No
Download
Use docker pull humio/humio-core:1.171.2 to download the latest version.
These notes include entries from the following previous releases: 1.171.1
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Storage
There is a change to the archiving logic so that LogScale no longer splits a given segment into multiple bucket objects based on ungrouped tag combinations in the segment. Tag groups were introduced to limit the number of datasources when a given tag had too many different values, but the previous implementation of archiving split the different tag combinations contained in a given segment back out into one bucket object per tag combination. This is a scalability issue and can also affect mini-segment merging. The new approach uploads one object per segment. The visible impact for the user is that there will be fewer objects in the archiving bucket, and the naming schema for the objects changes to no longer include the tags that were grouped into the tag groups that the datasource is based on. The set of events in the bucket remains the same. Because the scalability issue is a cluster risk, the change is released immediately.
For self-hosted customers: if you need time to change the external systems that read from the archive due to the naming changes, you may disable the DontSplitSegmentsForArchiving feature flag (see Enabling & Disabling Feature Flags). For more information, see Tag Grouping.
GraphQL API
The new parameter strict has been added to the input of the analyzeQuery() GraphQL query. When set to the default value true, query validation will always validate uses of saved queries and query parameters. When set to false, it will attempt to skip validation of saved query and query parameter uses. This is a breaking change because previously, validation behaved as if strict was set to false. To get the legacy behavior, set strict=false.
Deprecation
Items that have been deprecated and may be removed in a future release.
The color field on the Role type has been marked as deprecated (will be removed in version 1.195).
The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new behavior:
Storage
Relocation of datasources after a partition count change will now be restarted if the Kafka partition count changes again while the cluster is executing relocations. This ensures datasource placement always reflects the latest partition count.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The JDK included in container deployments has been upgraded to 23.0.2.
Once LogScale has been upgraded to 1.162.0 with the WriteNewSegmentFileFormat feature flag enabled, LogScale cannot be downgraded to a version lower than 1.157.0.
The minimum supported version that LogScale can be upgraded from has increased from 1.112 to 1.136. This change allows for removal of some obsolete data from the LogScale database.
Other
The Kafka client has been upgraded to 3.9.0.
New features and improvements
Security
Users granted the ReadAccess permission on the repository can now read files in read-only mode.
A new default role named Reader is now visible in the UI. The role only grants the ReadAccess permission. Unlike the existing default roles, the Reader role is not editable and cannot be deleted.
Installation and Deployment
Added support for communicating between the PDF Render Service and LogScale using an HTTP client rather than requiring HTTPS.
Administration and Management
Metrics made available on the Prometheus HTTP API have been modified so that the internal metrics that represent "meters" no longer become type=COUNTER in Prometheus, but instead are type=SUMMARY. The suffix on the metric name changes from _total to _count as a result. This also adds reporting of 1, 5, and 15 minute rates.
Usage is now logged to the humio repository.
User Interface
The adhoc-table content preview in the UI is now limited to 500 rows.
You can now hide the event distribution histogram to get even more space for looking at your data. This new button is located in the toolbar above the Results tab in the Search interface. For more information, see Display Results.
In the Inspection panel, case-insensitive search is now allowed when searching for field names. For example, repo and Repo will now match repo if this field is present.
Automation and Alerts
Updated the wording on a number of error and warning messages shown in the UI for alerts and scheduled searches.
Storage
Cluster statistics such as compressed byte size and the compressed size of the merged subset now count aux files at most once. Previously, the statistic counted every local aux file in the cluster, which would increase with the replication factor, but that sum of aux file sizes was added to a sum of segment file sizes which did not consider the replication factor. From the user point of view, this change does not affect the ingest accounting and measurements, but it does affect the following items:
The semantics of the compressedByteSize, compressedByteSizeOfMerged, and dataVolumeCompressed fields in the ClusterStatsType, RepositoryType, and OrganizationStats GraphQL types have changed: file sizes of both segments and aux files are now only counted once. These values are shown, for example, on the front page, and will be smaller than the old values.
Retention by compressed file size will keep more segments, since segments are deleted to stay under the actual limit, which is calculated as the configured limit minus the aux file sizes. For example, with a configured limit of 1 TB and 100 GB of aux files, segments are deleted to stay under 900 GB.
For more information, see Cluster statistics.
The frequency of Kafka deletions has been reduced from once per minute to once per 10 minutes with the aim of reducing the load on global. As a consequence of this change, Kafka will retain slightly more data.
GraphQL API
The analyzeQuery() GraphQL query now supports rejecting functions. This is done using the rejectFunctions input parameter, which takes a list of function names.
A new @stability directive has been added to the GraphQL API:
The @stability directive has been added on all non-deprecated output fields.
The @stability directive has a level argument with three possible enum values: Preview, ShortTerm, and LongTerm. A field can now have either the @deprecated or the @stability directive. The level Preview corresponds to the old @preview directive (which has been removed), the level ShortTerm corresponds to the previous stability promise of at least 12 weeks, and the level LongTerm means that the field is kept stable for at least 1 year.
Input fields without the @stability directive "inherit" the stability level from the query or mutation that the input type is used for; enum values without the directive "inherit" the stability level from the field that returns the enum type.
Some fields that were previously described as being in preview, but without the @preview directive, are now properly marked as in preview (the @stability directive with level Preview).
Usage of fields or enum values in Preview when calling the GraphQL endpoint is still shown in the extensions part of the response, but the format has changed.
For all existing deprecated fields that were deemed to have LongTerm stability, the version they will be removed in has been updated to reflect a 1-year deprecation period.
API
filterQuery in API Query metaData now searches using the same timestamp field as the original query, that is, the one set in the Time field selection in the UI. For example, it returns useIngestTime=true if the original query used the @ingesttimestamp field.
Configuration
Clusters using an HTTP proxy can now choose to have calls to the token endpoint for the Google, Bitbucket, GitHub, and Auth0 providers go through this proxy. This is configured by using new configuration values. The default value for all of these is false, so there is no change to how existing clusters are configured to use Google, Bitbucket, GitHub, or Auth0.
Two new metrics, global-reader-occupancy and chatter-reader-occupancy, have been added to measure occupancy of the global-events loop and the transientChatter-events loop. Additionally, global now also starts logging errors if the roundtrips take more than 10 seconds while the occupancy of the consumer part is below 90%. This includes a small update to the global-publish-wait-for-value metric, to also measure the time spent publishing the message to Kafka.
Ingestion
Clicking on the parser editor page now produces events that are more similar to what an ingested event would look like in certain edge cases.
The error preview for test cases on the Parsers page now shows if there are additional errors.
You can now validate whether your parser complies with the CrowdStrike Parsing Standard (CPS) 1.0 by clicking the checkbox in the parser editor.
For more information, see Normalize and Validate Against CPS Schema.
Dashboards and Widgets
Sections in the Styling panel for all widgets are now collapsible.
The Table widget cells will now show a warning along with the original value if decimal places are configured to be below 0 or above 20.
Queries
Added the resultPipelineExecutionCount field to the following logs from the QuerySessions class, starting with:
live part of live query ended:
static part of live query ended:
static query ended:
poll of live query:
This field captures how many times the result calculation pipeline has run for a given query, with the following remarks:
Join queries only count the main query, since execution counts for subqueries are logged separately.
Repeating queries sum up the execution counts for the individual queries to mimic the behavior of a single live query.
Searching for @id=X is now efficient when there is exactly one such top-level filter in the query and X is an actual event ID in the LogScale cluster: the time span of the search is automatically restricted to the 1-second interval designated by a substring of X. To further improve efficiency, include the proper tag filters in the search, as in the sketch below.
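A minimal sketch of such a query (the tag name, tag value, and event ID are hypothetical placeholders):

```logscale
// One top-level @id filter; the tag filter narrows the search
// to the matching datasources. SOME_EVENT_ID is a placeholder.
#type=accesslog @id=SOME_EVENT_ID
```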
Functions
When the @timestamp field is used in collect(), a warning has been added, because collecting @timestamp will usually not return any results unless there is only one unique timestamp or the limit parameter has been given an argument of 1. A work-around is to rename or create a new field with the value of @timestamp and collect that field instead, for example:

```logscale
timestamp := @timestamp | collect(timestamp)
```
The wildcard() function has an additional parameter: includeEverythingOnAsterisk. When this parameter is set to true and pattern is set to *, the function will also match events that are missing the field specified in the field parameter. For more information, see wildcard().
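A minimal sketch of the new parameter in use (the field name host is a hypothetical placeholder):

```logscale
// With includeEverythingOnAsterisk=true and pattern="*", events
// that lack the host field entirely are also matched.
wildcard(field=host, pattern="*", includeEverythingOnAsterisk=true)
```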
Introducing a new query function array:dedup() for deduplicating elements of an array. For more information, see array:dedup().
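As an illustration, a hedged sketch (the array name and values are hypothetical; the parameter syntax follows the conventions of the other array:* functions):

```logscale
// Given ips[0]=1.2.3.4, ips[1]=1.2.3.4, ips[2]=10.0.0.1,
// deduplication leaves ips[0]=1.2.3.4, ips[1]=10.0.0.1.
array:dedup("ips[]")
```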
The new query function setTimeInterval() is now available. This function overwrites the time interval otherwise set in the UI/API. Example usage:

```logscale
setTimeInterval(start=7d, end=12h, timezone="Europe/Copenhagen")
```

For more information, see setTimeInterval().
Other
Added organization to logs from building parsers.
When logging organizations, the name is now logged with the key organizationName instead of name.
The new metric globalsnapshot-pct-of-max-heap has been added. It reports the size of the most recently written global-snapshot.json file as a percentage of the maximum heap size.
If the WriteNewSegmentFileFormat feature flag is enabled via built-in mechanisms, the minimum version in global is raised to 1.157.0, so that any potential rollback does not go to a version that cannot properly handle the feature being on-then-off; builds before 1.157.0 do not properly handle the feature being off if it has been on before.
Fixed in this release
User Interface
The layout of the Table widget has been fixed: a vertical scroll bar was appearing inside the table even when rows took up minimum space, which would lead to users having to scroll in the table to see the last row.
The Events tab in Search results would generate an error when using @ingesttimestamp in the Time field selection. This issue has now been fixed.
The dialog for creating a new group did not close automatically after successfully creating a group. This issue has been fixed.
The Saved query dialog has been fixed so that the saved queries are now sorted.
The Filter Match Highlighting feature could be deactivated for some regular expression results due to a stack overflow issue in the JavaScript Regular Expression engine. This issue has been fixed and the highlighting now works as expected.
Large license limits would overflow in the UI, resulting in wrong limits being shown. This issue has been fixed.
Automation and Alerts
Listing actions on a trigger referencing a non-existing action would fail. This issue has been fixed.
Storage
In rare cases, the internal accounting of segment files used by queries and related metrics could be incorrect, which could lead to starved searches. This issue has been fixed.
Fixed a crash that could occur on boot if global contains dataspaces marked for deletion.
A fix has been made to prevent leaking empty datasource directories, by announcing in global that they are deleted some time before they are actually deleted from global.
An issue has been fixed which could in rare cases cause data loss of recently digested events due to improper cache invalidation of the digester state.
Made adjustments to handling of in-memory local datasource state, which should help ensure the local state is in sync with global.
GraphQL API
Instead of failing silently, GraphQL now returns an error in the following two scenarios:
Disabling feature flags on an organization if the feature is enabled globally.
Disabling feature flags for a user if the feature is enabled globally or for the user's organization.
API
filterQuery in API Query metaData was incorrect when using filters with implicit AND after aggregators. For example, groupBy(x) | y=* z=* would incorrectly give y=* z=* for the filterQuery, whereas * is the correct filterQuery. This issue has existed since 1.160.0 and it has now been fixed. You can work around the issue by explicitly adding | between filters.
Configuration
The dynamic configuration lookup-table-sync-await-seconds has been fixed, as it would require a restart to take effect.
Ingestion
The changes to parser tests that enabled the parser code page to produce events more similar to an ingested event have been reverted, due to unspecified errors for some users.
Dashboards and Widgets
Errors were occurring in dashboard queries when dashboard filters contained parameters that were only used within the filter itself and nowhere else in the query. This issue has now been fixed.
In the Time Chart widget, the Step after interpolation method would not display the line or area correctly when used with the Show gaps method for handling missing values.
The usage of filters for dashboards has been fixed.
In the Time Chart widget, an issue has been fixed where values below the minimum value of a Logarithmic axis would not be displayed, but values below 0 would.
Value and label of the Gauge widget could overflow. This issue has been fixed.
Fixed an issue where the event distribution chart would be hidden by default if a repository was configured with automatic search disabled.
Log Collector
When computing group memberships in fleet management, a query timeout could result in collectors losing their group memberships. This issue has now been fixed.
Queries
The Query stats panel on the Organization Query Monitor was reporting misleading information about the total number of running queries, total number of live queries, etc., when there were more than 1,000 queries that matched the searched term. This has been fixed by changing the global part of the result of the runningQueries() GraphQL query, although the list of specific queries used to populate the table on the page is still capped at 1,000.
An error in the query execution could lead to a query that would not progress and not stop, and would appear to hang indefinitely. This could happen when hosts were removed from the cluster. This issue has now been fixed.
Some queries (especially live queries) would continuously send a warning about missing data. This could happen if the query was planned at a time when there were cluster topology changes. This issue has been fixed and, instead of sending the warning, the query will now automatically restart since there might be more data to search.
The query table endpoint client has been fixed, as it was unable to receive the response for tables larger than 128 MB, which caused an error.
A performance regression in the query scheduler has been fixed as it could lead to query starvation and slow searches.
A misalignment issue between primary and subquery relative intervals has been fixed. Previously, a subquery's relative time interval did not align correctly with the primary query interval. This misalignment could cause slight differences in the relative now reference point between the primary query and subquery.
An issue has been fixed in the deserialization of queries, which prevented some queries from being handed over to another node in the cluster.
Queries could sometimes fail and return an IndexOutOfBoundsException error. This issue has been fixed.
Functions
Matching on multiple rows in glob mode missed some matching rows. This happened in cases where there were rows with different glob patterns matching on the same event. For example, using a file example.csv:

```csv
column1, column2
ab*, one
a*, two
a*, three
```
And the query:
```logscale
match(example.csv, field=column1, mode=glob, nrows=3)
```
An event with the field column1=abc would only match on the last two rows. This issue has been fixed so that all three rows will match on the event.
objectArray:eval() has been fixed, as it did not work on array names containing an array index, for example objectArray:eval(array="myArray[0].foo[]", ...).
Fixed an issue where parseCEF() would stop a parser or query upon encountering invalid key-value pairs in the CEF extensions field. For example, in:

```
Jun 09 02:26:06 zscaler-nss CEF:0||||||| xx==
```

since the CEF specification dictates that = must be escaped if it is meant as a value, the second = would trigger the issue, as it is no longer a valid key-value pair. If such an error is encountered, the event is left unparsed and a parser error field is added.
The array:dedup() function has been fixed, as it would not write the output array if there were no duplicate elements in the input array and the output array was different from the input array.
The defineTable() function in Ad-hoc tables has been fixed, as it incorrectly used the UTC time zone for query start and end timestamps, regardless of the primary query's time zone. This issue only affected queries where the primary query used a non-UTC time zone and either of the following applied:
the primary query's time interval used calendar-based presets (like calendar:2d or now@week), or
the sub-query used any query function that uses the time zone, for example timeChart(), bucket(), and any time:* function.
The defineTable() function in Ad-hoc tables has been fixed, as it did not use the ingest timestamp for the time range specification provided by the primary query, using the event timestamp instead. This issue only affected queries where the primary query used ingest timestamps.
Other
The type for deprecated package schema fields has been renamed from valid to null.
Feature flags were marked experimental even if they were in rollout. This issue has been fixed so that the actual non-experimental features in the cluster are now correctly displayed in the side bar on the Organization overview page.
Improvement
User Interface
The Search Link dashboard interaction now allows you to specify that the target view/repository is the same view the dashboard is in. This setting allows for exporting and importing the dashboard into another view, while allowing the Search Link interaction to execute in the same view the dashboard was imported to. This option is now the first suggested option in the drop-down list for the Dashboard Link and Search Link interaction types.
Storage
Improved performance of replicating IOC files to allow faster replication.
Improved performance when syncing IOCs internally within nodes in a cluster.
Improved the performance of ingest queue message handling that immediately follows a change in the Kafka partition count. Without this improvement, changing the partition count could substantially slow down processing of events ingested before the repartitioning.
Relocation of datasources after a partition count change will now be restarted if the Kafka partition count changes again while the cluster is executing relocations. This ensures that datasource placement always reflects the latest partition count.
Queries
In cases where a streaming query is unable to start (for example, if it refers to a file that does not exist), an error message is now returned instead of an empty string.
Functions
Improved the error message for missing time zones in the parseTimestamp() function.