Falcon LogScale 1.213.1 LTS (2025-11-26)
| Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes |
|---|---|---|---|---|---|---|---|---|
| 1.213.1 | LTS | 2025-11-26 | Cloud, On-Prem | 2026-11-30 | Yes | 1.150.0 | 1.177.0 | No |
Download
Use docker pull humio/humio-core:1.213.1 to download this release
These notes include entries from the following previous releases: 1.213.0, 1.212.0, 1.211.0, 1.210.0, 1.209.0, 1.208.0
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
GraphQL API
The dashboard field in the ScheduledReport GraphQL type is now optional. When users lack dashboard access permissions, the field returns a null result instead of causing a request failure.
Note
Users should update their queries and type definitions to account for the optional nature of this field and that a
null value exists.
Functions
Renamed the parameter caseInsensitive in text:editDistance() and text:editDistanceAsArray() to ignoreCase, for consistency with other functions.
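The nullable dashboard field in the GraphQL breaking change above can be handled defensively on the client side. A minimal Python sketch, where the response shape is illustrative rather than the exact LogScale schema:

```python
# Illustrative GraphQL response for a scheduled report where the
# caller lacks dashboard access; the exact schema may differ.
response = {
    "data": {
        "scheduledReport": {
            "id": "report-1",
            "dashboard": None,  # now returned as null instead of failing the request
        }
    }
}

report = response["data"]["scheduledReport"]
dashboard = report.get("dashboard")
# Treat a null dashboard as "no access" rather than as an error.
label = dashboard["name"] if dashboard is not None else "(dashboard not accessible)"
print(label)
```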
Advance Warning
The following items are due to change in a future release.
Configuration
Cached data files mode, which allows users to configure a local cache directory for segment files, has been deprecated and will be removed in version 1.225.0. This configuration is no longer recommended, as using a local drive with bucket storage generally provides better performance.
The associated configuration variables have also been deprecated and are planned for removal in version 1.225.0:
CACHE_STORAGE_DIRECTORY
CACHE_STORAGE_PERCENTAGE
CACHE_STORAGE_SOURCE
Removed
Items that have been removed as of this release.
GraphQL API
Removed deprecated GraphQL elements:
Mutations:
addStarToAlertV2
removeStarFromAlertV2
addStarToScheduledSearch
removeStarFromScheduledSearch
Fields:
Alert.isStarred
ScheduledSearch.isStarred
UserSettings.starredAlerts
The GraphQL enum value GraphQlDirectivesAmountLimit from enum DynamicConfig has also been removed.
Metrics and Monitoring
Removed the deprecated metric datasource-count, which was responsible for continuously reporting the number of datasources per repository.
Repository datasource information is still available in the following ways:
When new datasources are created and deleted, that information is available to users via datasource logs.
Users can also obtain the datasource count using the query GET api/v1/repositories/$DATASPACE to view a current list of datasources for a given repository.
For more information, see Repository and View Settings, Datasources, Ingestion: Ingest Phase.
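As a sketch, the datasource listing mentioned above can be requested with a plain HTTP GET. The host and token below are placeholders, and the exact response fields should be checked against the Repository and View Settings documentation:

```python
from urllib.request import Request

def repository_request(base_url: str, dataspace: str, token: str) -> Request:
    """Build a GET request for api/v1/repositories/$DATASPACE,
    whose JSON response includes the repository's datasources."""
    url = f"{base_url}/api/v1/repositories/{dataspace}"
    return Request(url, headers={"Authorization": f"Bearer {token}"})

req = repository_request("https://logscale.example.com", "myrepo", "$TOKEN")
# urllib.request.urlopen(req) would perform the call against a real cluster.
```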
Deprecation
Items that have been deprecated and may be removed in a future release.
The updateUploadFileAction() GraphQL mutation is deprecated. Use updateUploadFileActionV2() instead.
The EXTRA_KAFKA_CONFIGS_FILE configuration variable has been deprecated and is planned for removal no earlier than version 1.225.0. For more information, see RN Issue.
rdns() has been deprecated and will be removed in version 1.249. Use reverseDns() as an alternative function.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
User Interface
Updated dropdown menu items to direct users to the CrowdStrike Support Portal.
GraphQL API
GraphQL mutations used for updating actions will now preserve existing label values when the labels argument is omitted. Users who want to remove labels from an action will need to explicitly set the labels argument to an empty list (labels: []).
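The omitted-versus-empty distinction can be simulated as follows; this is a sketch of the documented semantics, not LogScale's actual implementation:

```python
_OMITTED = object()  # sentinel distinguishing "argument not supplied" from []

def update_action_labels(existing_labels, labels=_OMITTED):
    """Mirror the documented mutation behavior: omitting `labels`
    preserves the existing values, while labels=[] clears them."""
    if labels is _OMITTED:
        return list(existing_labels)  # argument omitted -> keep existing labels
    return list(labels)               # supplied (possibly empty) -> replace labels
```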
Dashboards and Widgets
Removed the support email link (logscalesupport@crowdstrike.com) from scheduled report email footers.
Queries
Made changes to correlate() internals that are not backwards compatible. Clusters with mixed new and old LogScale versions will not be able to run correlate() queries until all nodes are upgraded. This limitation also applies to Multi-Cluster Search queries across clusters running different versions.
Metrics and Monitoring
The internal monitoring jobs that used to query the internal humio repository for metrics now query the humio-metrics repository instead.
To support this, the default value of SEARCH_PIPELINE_MONITOR_QUERY has been changed to #kind=logs | count() for clusters without metrics in the LogScale repository.
Functions
The following function restrictions are now compile-time errors instead of runtime errors, making them detectable by the GraphQL APIs and the Language Server Protocol (LSP):
eval() now includes coverage for invalid usage within expressions
groupBy() now includes coverage for limiting parameter values exceeding the maximum allowed value
series() now includes coverage for collection parameters containing prohibited fields
regex(), replace(), and array:regex() now include coverage for their use of Regular Expression Engine v2 when it is disabled at the cluster level.
Changed liveness restrictions for selfJoin() and selfJoinFilter() to be enforced at compile time instead of runtime, enabling detection by the Language Server Protocol (LSP) and GraphQL validation endpoints.
Changed top-level restrictions for join-like query functions to be enforced at compile time instead of runtime, likewise enabling detection by the LSP and GraphQL validation endpoints.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Upgraded LogScale's Zstandard (ZSTD) compression library from version 1.5.6 to 1.5.7.
Upgraded the bundled Java Development Kit (JDK) to Java 25.0.1.
For this upgrade, users should be aware that on systems with the Transparent Huge Pages (THP) mode set to madvise, the option -XX:+UseTransparentHugePages does not enable huge pages when running with the default G1 garbage collector. To address this, the following workaround is available:
# echo always > /sys/kernel/mm/transparent_hugepage/enabled
New features and improvements
Security
Added the new environment variable SAML_METADATA_ENDPOINT_URL, allowing users to specify where LogScale will fetch the IdP signing certificate. This provides an alternative to using SAML_IDP_CERTIFICATE and SAML_ALTERNATIVE_IDP_CERTIFICATE, and enables easier certificate management without having to restart LogScale with a new set of variables.
The existing certificate configuration options remain available, and when both methods are specified, certificates from both sources will be used.
GraphQL API
Enhanced the GraphQL entities search API to include scheduled reports as searchable assets.
This change extends the entitiesSearch, entitiesPage, and entitiesLabels query endpoints to:
Return scheduled reports as part of search results when filtering by entity types
Provide full access to scheduled report metadata through the ScheduledReportResult datatype
Support the same filtering and pagination capabilities available for other asset types
Maintain proper view-level access controls for scheduled report visibility
Storage
Moved bucket storage actions (for example, writing data to disk after bucket download, and encryption/decryption when applicable) to a dedicated threadpool. This should result in less blocking on the threadpool responsible for handling HTTP requests, which could previously lead to nodes becoming unresponsive.
Added support for archiving ingested logs to Azure Storage. Logs that are archived using Azure Storage are available for further processing in any external system that integrates with Azure.
Users can configure Azure Storage archiving using the following settings in the Egress repository:
Bucket (required) – destination bucket for archived logs
Format – choose between NDJSON or Raw formatting for the stored file (default: NDJSON)
Archiving start – select between archiving all segments or only those starting after a specified UTC timestamp
For more information, see Azure Archiving.
API
Extended a user's ability to control lookup file management with the creation of two REST API endpoints, filefromquery and fileoperation. Also extended the existing REST API endpoint file to support PATCH operations, giving users the ability to update existing files. Previously, users could only replace them in their entirety.
The endpoint filefromquery provides the following functionality:
Support for creating and updating lookup files directly from the dropdown menu in the search results; see Create a lookup file in the Search interface for more information.
Support for updating lookup files via extensions to an existing file's REST API.
The endpoint fileoperation provides the following functionality:
Allows users to view the progress of operations started on other endpoints.
Updates the state of PATCH operations on the files endpoint.
For more information, see Lookup API.
Added the parameter dataspaceId to the Missing Segments API to allow deletion of all missing segments in a specific dataspace.
Dashboards and Widgets
Added a default Series color palette option for dashboards. This new palette can be configured at dashboard level and can be inherited by those widgets that support multiple color palettes for differentiating between series.
Added a new styling option to adjust the size of axis and legend titles on Time Chart, Pie Chart, Bar Chart, Scatter Chart, and Heat Map widgets.
A new Sorting styling option is now available for the Bar Chart and Heat Map widgets, allowing the x and y axes to be ordered with different methods. For more information, see Bar Chart Property Reference, Heat Map Property Reference.
Metrics and Monitoring
Added the field window_count to Timer metrics. It tracks the number of measurements in the given window, usually 60 seconds.
Functions
Added the query function matchAsArray(), which matches multiple rows from a CSV or JSON file and adds them as object array fields. This is similar to the match() function but with the following key differences:
Only supports ExactMatch mode
Adds multiple matches as structured arrays instead of creating separate events
Allows customization of the array name using the asArray parameter
The length of the structured arrays is limited by the nrows parameter. If the number of matches is larger than nrows, then the last matching nrows are put in the structured array. This is similar to how the match() function deals with matches larger than nrows. For more information, see matchAsArray().
The Upload file action has now been renamed to Lookup file action and improved with new upload functionalities:
Overwrite – Replaces the entire contents of an existing file (existing behavior)
Append – Adds new information to the end of an existing file
Update – Updates specific rows based on selected key columns
Note
The existing behavior for the Lookup File action is Overwrite, which replaces the entire contents of existing CSV files. For more information, see Action Type: Lookup File, Lookup Files.
Added two new functions for calculating edit (Levenshtein) distances:
text:editDistance() – returns the edit distance between target and reference strings, capped at maxDistance
text:editDistanceAsArray() – returns an object array containing edit distances between a target string and multiple reference strings
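As an illustration of the capped edit distance these functions compute, here is a minimal Python sketch. It is not LogScale's implementation, and two details are assumptions to verify against the function reference: that the result is clamped to maxDistance, and that ignoreCase lowercases both strings before comparing.

```python
def edit_distance_capped(target: str, reference: str,
                         max_distance: int, ignore_case: bool = False) -> int:
    """Levenshtein distance between target and reference, capped at
    max_distance (sketch of the documented text:editDistance() semantics)."""
    if ignore_case:
        target, reference = target.lower(), reference.lower()
    # Standard dynamic-programming table, one row at a time.
    prev = list(range(len(reference) + 1))
    for i, tc in enumerate(target, start=1):
        cur = [i]
        for j, rc in enumerate(reference, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (tc != rc)))   # substitution
        prev = cur
    return min(prev[-1], max_distance)
```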
Fixed in this release
Storage
Fixed an issue where multiple nodes would concurrently attempt to execute the same merges of mini-segments, wasting work. Merges will now consistently use one node.
Fixed an issue causing unbounded creation of global snapshots in temporary directories during periods of poor bucket storage performance.
API
A file's HTTP PATCH endpoint could get stuck while reading new data. This issue has now been fixed by imposing size restrictions and ensuring the stream is read properly using Pekko sinks.
Dashboards and Widgets
Added support for referencing parsers within queries, allowing parsers to be included and referenced from other parsers. The new format supports new macros for $parser:// and $query://. For more information, see Referencing Resources.
Shared dashboards containing widgets using anchored time points (for example, calendar: 1w@w for last week) would fail authorization and fail to display dashboard data. This issue has now been fixed.
Queries
Fixed an issue where anchored time points would cause import/export of dashboards and saved queries to fail. New schema versions for dashboards and saved queries (0.23.0 and 0.60 respectively) will now allow advanced time interval syntax.
For more information, see Anchored Time Points - Syntax.
Fixed an issue where queries using the correlate() function within a federated search could experience a memory leak.
Fixed an issue where the internal polling frequency of subqueries could result in slower result display.
Metrics and Monitoring
Fixed two issues with metrics:
Ingest queue offset metrics are now properly cleaned up when the job switches nodes, preventing stale metric reporting.
Falcon Data Replicator (FDR) queue metrics can now be re-registered after being unregistered, supporting re-enabled FDR feeds.
Affected metrics:
ingest-consumer-group-offset
ingest-consumer-group-offset-lag
ingest-offset-lowest
ingest-queue-lowest-offset-lag
fdr-message-count
fdr-inflight-message-count
For more information, see Ingesting FDR Data into a Repository.
Fixed an issue where the progress report for the metric ingest-queue-read-offset would erroneously log errors stating Ingest queue progress error approximately 90 minutes after cluster restart.
Functions
Fixed an issue where the parseXml() function would output arrays incompatible with array functions due to the lack of a 0 element. Backward compatibility with existing queries is maintained by keeping the first element in the non-array field. For more information, see parseXml().
The parseTimestamp() function would cause an internal server error when used outside parsers and given format strings with insufficient date information. This issue has now been fixed.
The serialization protocol in the defineTable() function caused query failure. This issue has now been fixed.
Other
Fixed a bug in LDAP authentication.
Fixed an issue where the process to delete messages from the ingest queue would sometimes trigger the error Skipping Kafka event deletion for this round since stripping topOffsets failed during the calculation phase without cause.
Known Issues
Storage
For clusters using secondary storage where the primary storage on some nodes in the cluster may be getting filled (i.e., the storage usage on the primary disk is halfway between PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE), those nodes may fail to transfer segments from other nodes. The failure will be indicated by the error java.nio.file.AtomicMoveNotSupportedException with message "Invalid cross-device link".
This does not corrupt data or cause data loss, but will prevent the cluster from being fully healthy, and could also prevent data from reaching adequate replication.
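The "halfway between" condition can be made concrete with a small sketch; this is an interpretation of the description above, not LogScale's actual check:

```python
def in_transfer_risk_window(usage_pct: float,
                            primary_storage_percentage: float,
                            primary_storage_max_fill_percentage: float) -> bool:
    """True when primary disk usage is at or past the midpoint between
    PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE,
    the region where segment transfers from other nodes may fail with
    AtomicMoveNotSupportedException (assumed reading of the known issue)."""
    midpoint = (primary_storage_percentage + primary_storage_max_fill_percentage) / 2
    return usage_pct >= midpoint
```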
Improvement
Administration and Management
Re-introduced audit logging when overriding an existing Lookup file with identical content.
User Interface
Updated the series formatting color picker for widgets and dashboards to support color selection from predefined color palettes.
Enhanced the Lookup files and Interactions asset types in the Resources page, as follows.
Lookup files table component improvements:
Added table sorting
Implemented proper pagination
Added package column filtering
Updated package column to show versionless package string instead of with version
Interactions table component improvements:
Added sort functionality
Implemented proper pagination
Added column filters for package and interaction type
Fixed the Language Server Protocol (LSP) features in the panel, so that the query editor for editing Search link interactions has LSP features (syntax highlighting, documentation, suggestions, etc.).
Ingestion
Added error logging for ingest queue progression issues. When the read offset metric for any ingest queue partition doesn't progress, logs will display an error message stating Ingest queue progress error: before providing the log data.
The criteria for an error message being provided are:
Ingest queue doesn't progress over a 10-minute period
Ingest queue shows no activity for over an hour
Note
LogScale clusters regularly send internal messages on every ingest partition. If the metric does not increase, there is an issue with the digester.
Queries
Added user-visible warnings to alert users when query polling fails repeatedly.
Query cost/work calculation no longer includes time the query spends waiting for work.
For more information, see Organization Query Monitor — Query Stats.
Digest nodes now measure wall-clock time instead of CPU time when updating live queries with events, improving performance and reducing CPU usage.
Note
This improvement may introduce slight variations in live cost measurements due to thread scheduling.
Improved live query handling during high ingest latency. LogScale now avoids halting live queries when latency is not caused by digest node overload.
To control this behavior, users can apply the environment variable LIVEQUERY_CANCEL_TRIGGER_INGEST_OCCUPANCY_LIMIT. This variable expresses, as a percentage, the amount of time spent waiting for events to be stored in segments and written to live queries, compared to the time spent obtaining data from Kafka.
Note
Setting the value to -1 disables the logic.
Warning
The maximum environment variable value is 100. If set to this value, live queries will not be stopped due to ingest delay.
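A hedged sketch of the on/off semantics documented for this variable follows; the actual trigger logic inside LogScale is more involved than this:

```python
def should_stop_live_queries(wait_occupancy_pct: float, limit_pct: float) -> bool:
    """Interpretation of LIVEQUERY_CANCEL_TRIGGER_INGEST_OCCUPANCY_LIMIT:
    -1 disables the logic entirely; at the maximum value of 100, live
    queries are never stopped due to ingest delay; otherwise stop live
    queries when the observed wait percentage exceeds the configured limit."""
    if limit_pct == -1 or limit_pct >= 100:
        return False
    return wait_occupancy_pct > limit_pct
```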
Metrics and Monitoring
Added new metrics for live query execution monitoring:
total-live-events – provides an aggregate count of live events across all dataspaces
worker-live-queries – provides the number of live queries currently running on the worker node
worker-live-dataspace-queries – provides the total number of repository queries currently executing on the worker node
Functions
Improved correlate() graph analysis performance. Users may notice changes to the query graph visualization. For more information, see Correlation Options, Display tabs.
Improved error handling resiliency for multi-pass functions like correlate() by creating an automatic stop for queries that would previously stall indefinitely. Queries that stall will now be stopped automatically.