Falcon LogScale 1.219.1 LTS (2026-02-13)
| Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes |
|---|---|---|---|---|---|---|---|---|
| 1.219.1 | LTS | 2026-02-13 | Cloud, On-Prem | 2027-02-28 | Yes | 1.150.0 | 1.177.0 | No |
Download
Use `docker pull humio/humio-core:1.219.1` to download the latest version.
These notes include entries from the following previous releases: 1.219.0, 1.218.0, 1.217.0, 1.216.0, 1.215.0, 1.214.0
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Automation and Triggers
LogScale now enforces a limit of 10 actions per trigger (alert or scheduled search). Existing triggers exceeding this limit will continue to run, but must comply with the limit when edited.
Advance Warning
The following items are due to change in a future release.
Security
Starting from LogScale version 1.237, support for insecure `ldap` connections will be removed. Self-Hosted customers using LDAP will only be able to use secure `ldaps` connections.
User Interface
From version 1.225.0, LogScale will enforce a new limit of 10 labels that can be added or removed in bulk for assets such as dashboards, actions, alerts and scheduled searches.
Labels will also have a character limit of 60.
Existing assets that violate these newly imposed limits will continue to work until they are updated - users will then be forced to remove or reduce their labels to meet the requirement.
Queries
Due to various upcoming changes to LogScale and the recently introduced regex engine, the following regex features will be removed in version 1.225:
Octal notation
Quantification of unquantifiable constructs
Octal notation is being removed due to logic application difficulties and its tendency to make typographical errors easier to overlook.
Here is an example of a common octal notation issue:
```regex
/10\.26.\122\.128/
```

In this example, `\122` is interpreted as the octal escape for `R` rather than the intended literal `122`. Similarly, the unescaped `.` matches not just the punctuation itself but any single character except for new lines.

Any construction of `\x` where `x` is a number from 1 to 9 will always be interpreted as a backreference to a capture group. If the corresponding capture group does not exist, it is an error.

Quantification of unquantifiable constructs is being removed due to a lack of appropriate semantic logic, leading to redundancy and errors.
Unquantifiable constructs being removed include:

- `^` (the start of string/start of line)
- `$` (the end of string/end of line)
- `(?=...)` (a positive lookahead)
- `(?!...)` (a negative lookahead)
- `(?<=...)` (a positive lookbehind)
- `(?<!...)` (a negative lookbehind)
- `\b` (a word boundary)
- `\B` (a non-word boundary)

For example, the end-of-text construct `$*` only has meaning for a limited number of occurrences. There can never be more than one occurrence of the end of the text at any given position, making quantifiers on `$` redundant.

A common pitfall that causes this warning is when users copy and paste a glob pattern like `*abc*` in as a regex, but delimit the regex with start-of-text and end-of-text anchors:

```regex
/^*abc*$/
```

The proper configuration should look like this:

```regex
/abc/
```

For more information, see LogScale Regular Expression Engine V2.
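Both pitfalls above can be reproduced in Python's `re` module, whose octal-escape and anchor-quantification rules are similar; this is an analogous engine used purely for illustration, and the LogScale engine may differ in details:

```python
import re

# Octal pitfall: \122 is parsed as three octal digits (the character "R"),
# not the literal string "122"; the unescaped "." matches any single character.
pat = re.compile(r"10\.26.\122\.128")
assert pat.fullmatch("10.26xR.128") is not None   # "." matched "x", \122 matched "R"
assert pat.fullmatch("10.26.122.128") is None     # the intended address never matches

# Quantifying an anchor: Python's engine rejects it outright, for the same
# reason LogScale is removing it -- the repetition is meaningless.
try:
    re.compile(r"^*abc*$")
except re.error as err:
    print(err)  # e.g. "nothing to repeat at position 0"

# The unanchored filter expresses the glob *abc* correctly:
assert re.search(r"abc", "xx-abc-yy") is not None
```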
Removed
Items that have been removed as of this release.
Storage
Segment and lookup file bucket storage upload protocols have been improved in preparation for incoming changes. As a result, the metric bucket-storage-request-upload-queue-overflow has been removed, as the underlying logic this metric was measuring no longer exists.
Configuration
Removed the following deprecated configuration variables:

- `S3_STORAGE_FORCED_COPY_SOURCE`
- `S3_BUCKET_STORAGE_PREFERRED_MEANS_FORCED`

Users previously using `S3_STORAGE_FORCED_COPY_SOURCE` should now use `S3_STORAGE_PREFERRED_COPY_SOURCE` instead.

Removed the `SEGMENT_TO_HOST_MAPPING_CRASH_SETTLING_TIME_SECONDS` configuration variable, as the logic is now handled internally according to Heartbeats.
Deprecation
Items that have been deprecated and may be removed in a future release.
The Release Note Full Index page has been deprecated. Please use the Search Release Notes page to search the release notes for any product.
In order to simplify and clean up older documentation and manuals that refer to past versions of LogScale and related products, the following manual versions will be archived after 15th December 2025:
This archiving will improve the efficiency and navigability of the site.
Archived manuals will be available in a download-only format in an archive area of the documentation. Manuals that have been archived will no longer be included in the search, or accessible to view online through the documentation portal.
The following GraphQL mutations have been deprecated:
- addAlertLabel
- removeAlertLabel
- removeScheduledSearchLabel

The deprecated GraphQL mutations will be replaced by the following mutations:
The following GraphQL mutations are being added:
The following GraphQL APIs are deprecated and will be removed in version 1.225 or later:
In the updateSettings mutation, these input arguments are deprecated:
isPackageDocsMessageDismissed
isDarkModeMessageDismissed
isResizableQueryFieldMessageDismissed
On the UserSettings type, these fields are deprecated:
isPackageDocsMessageDismissed
isDarkModeMessageDismissed
Note
The deprecated input arguments will have no effect, and the deprecated fields will always return true until their removal.
The userId parameter for the updateDashboardToken GraphQL mutation has been deprecated and will be removed in version 1.273.
The `EXTRA_KAFKA_CONFIGS_FILE` configuration variable has been deprecated and is planned to be removed no earlier than version 1.225.0. For more information, see RN Issue.
`rdns()` has been deprecated and will be removed in version 1.249. Use `reverseDns()` as an alternative function.

The Secondary Storage feature is now deprecated and will be removed in LogScale 1.231.0.
The Bucket Storage feature provides superior functionality for storing rarely queried data in cheaper storage while keeping frequently queried data in hot storage (fast and expensive). For more information, see Bucket Storage.
Please contact LogScale support for any concerns about this deprecation.
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Installation and Deployment
LogScale has temporarily downgraded its version of Java to v24 due to a potential regression in Java v25, which could affect digest when using zstd compression in Kafka. The downgrade will remain in effect until the issue is resolved, or Java v25 is confirmed benign.
Storage
When a request to LogScale hits a timeout for updating the global database, it will now return HTTP status code 500 instead of status code 400.
LogScale now prevents start-up if a user's Azure endpoint base has not been configured for Azure bucket storage.
If the `AZURE_STORAGE_BUCKET`, `AZURE_STORAGE_ACCOUNTNAME`, and `AZURE_STORAGE_ACCOUNTKEY` variables are specified and `AZURE_STORAGE_ENDPOINT_BASE` is not specified, LogScale will fail to start rather than delaying failure until an attempt to connect to the bucket is made.
Ingestion
The environment variable `KAFKA_INGEST_QUEUE_SKIP_ON_ERROR` must now be explicitly set to skip messages from the ingest queue. Previously, specific corrupt Kafka records would be automatically skipped, even if the variable was set to `false`.
Queries
Filter prefixes have been refactored to change the way they are validated - as a result, the diagnostic message for all prefixes has been changed.
A query prefix may only contain pure filters; transformations, aggregations, and the like are not allowed. Functions are also disallowed, even if their behavior is purely filtering.
Upgrades
Changes that may occur or be required during an upgrade.
User Interface
Upgraded the API explorer to GraphiQL version 5.2.0.
Configuration
LogScale has upgraded its Netty version to 4.2.7.
New features and improvements
User Interface
The following bulk actions can now be performed on multiple assets:
Export as .zip file
Assets that support this feature include:
Actions
Dashboards
Interactions
Lookup files
Parsers
Triggers
LogScale now also supports enabling and disabling triggers in bulk.
Corresponding GraphQL Batch operations are also available.
For more information, see Table Components.
Documentation
The release note search system has been updated to provide more functionality across a wider range of products. Searching of release notes has been expanded to add support for searching multiple individual products (LogScale, Log Collector, Aux PDF and Humio Operator):
We now have full release notes for each of these products with their own dedicated page and entries.
Improved search speed and filtering
Release note searches can now be saved and shared
With this change, the Full Release Notes Index page has been deprecated as the new search page provides better functionality for searching the release note system. See RN Issue.
Automation and Triggers
Added a new system repository humio-trigger-execution-info, which contains information about the execution of triggers. This new system repository is meant to be consumed by other systems; for a human-readable version, refer to the humio-activity repository.
Currently, this new system repository only contains information about the execution of scheduled searches, not alerts.
A new message template function for formatting timestamps is now available, providing more formatting options. It applies to the `query_end`, `query_start`, and `triggered` timestamps. For example: `{format_time(triggered, "yyyy-MM-dd'T'HH:mm:ssX")}`. For more information, see Message Templates and Variables.
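As a rough cross-check of what such a pattern produces, the Java-style pattern `yyyy-MM-dd'T'HH:mm:ssX` corresponds approximately to the following Python `strftime` sketch; the epoch-millisecond value here is an arbitrary illustration, not taken from the release:

```python
from datetime import datetime, timezone

# Arbitrary trigger timestamp in epoch milliseconds (illustrative value only).
triggered_ms = 1764765006226

# "yyyy-MM-dd'T'HH:mm:ssX" roughly maps to "%Y-%m-%dT%H:%M:%S" plus a zone
# designator; the "X" token renders UTC as the literal "Z".
dt = datetime.fromtimestamp(triggered_ms / 1000, tz=timezone.utc)
formatted = dt.strftime("%Y-%m-%dT%H:%M:%S") + "Z"
print(formatted)  # 2025-12-03T12:30:06Z
```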
Storage
Enabled the new bucket queue implementation by default. It can be disabled via the `NewFileTransferQueuing` feature flag.
API
Added a new parameter `nextRunInterval` to the POST `api/v1/queryjobs` endpoint for query submission. This parameter provides a hint to the query engine about the next run's interval, improving performance through partial result reuse. Example usage:

```json
{
  [...]
  "nextRunInterval": {
    "start": 1764765006226,
    "end": 1764851406227
  }
}
```

Note

This parameter and its capability are relevant only when users are submitting the same query over and over for different time intervals.
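For illustration, a client that resubmits the same query for sliding intervals might attach the hint like this; the `queryString`, `start`, and `end` field names in the sketch are assumptions for illustration, and only `nextRunInterval` comes from the release note:

```python
import json

def build_queryjob_payload(query_string, start_ms, end_ms, step_ms):
    # Hint the engine that the next submission will cover the interval
    # shifted forward by step_ms, enabling partial result reuse.
    return {
        "queryString": query_string,   # assumed field name for this sketch
        "start": start_ms,             # assumed field name for this sketch
        "end": end_ms,                 # assumed field name for this sketch
        "nextRunInterval": {
            "start": start_ms + step_ms,
            "end": end_ms + step_ms,
        },
    }

payload = build_queryjob_payload("count()", 1764765006226, 1764851406226, 86_400_000)
print(json.dumps(payload["nextRunInterval"]))
```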
Added a new admin-level API for unsetting a segment's bucketId field. This is for segments that are on disk but not in bucket storage. In cases where a bucket storage has lost data, this API can be used to remove corresponding metadata from LogScale, ending repeated attempts to download the missing files.
Usage requires a POST call to the following endpoint, where `bucketField` specifies which bucket field to unset (e.g., "primary" or "secondary"):

```
/api/v1/dataspaces/${dataspaceId}/datasources/${datasourceId}/segments/${segmentId}/unset-bucket-id?bucketField=${bucketField}
```

Here's an example:

```shell
curl -X POST "https://${clusterUrl}/api/v1/dataspaces/${dataspaceId}/datasources/${datasourceId}/segments/${segmentId}/unset-bucket-id?bucketField=primary" -H "Authorization: Bearer ${token}"
```

Added the parameter `queryKind` to the GraphQL mutation analyzeQuery, which indicates what kind of query program is being validated/analyzed.

The valid value for a standard search query is:

```graphql
{ standardSearch: {} }
```

The valid value for a filter prefix is:

```graphql
{ filterPrefix: {} }
```
Configuration
Added a new dynamic configuration `GraphQLMaxErrorsCount` to configure the maximum number of errors returned in the GraphQL response `errors` array. The default value is `100`, with valid values between `1` and `10000`.
Dashboards and Widgets
A new styling option in the `Table` widget now enables configuring custom column labels:

- Users can now rename column headers directly in the table widget's style configuration panel.
- Custom column labels are preserved when switching between columns and refreshing the view.
For more information, see Table Property Reference.
A new styling option in the `Table` widget now allows users to reorder columns. A reset button is also available for restoring the original column order of the query result.

For more information, see Table Property Reference.
`Table` widgets now support a new Column overflow setting with options to either truncate or wrap text content. Users can now control how long text entries are handled in table columns, improving readability and accommodating different data and display preferences.

The setting is available in the widget style panel under General.
For more information, see Table Widget.
Log Collector
Added new configuration variables:
These variables control whether Log Collector should use the configured HTTP proxy when calling the update server and download endpoint, respectively. The default is to use the proxy, which maintains the same behavior as before this change.
Queries
Added support in the LogScale Regular Expression Engine V2 for hexadecimal escape sequences up to 4 digits in length using the following formats:
- `\x{n}`
- `\x{nn}`
- `\x{nnn}`
- `\x{nnnn}`

Note

Curly brackets are required for this syntax. This is in addition to the existing `\xnn` and `\unnnn` notations.

Added support for repeated backreferences in the LogScale Regular Expression Engine V2. For example, the pattern:

```regex
(.)\1{2,3}
```

can now be used to detect sequences of repeated characters.
Views can now be configured to resolve saved queries, lookup files and field aliases from a different view or repository.
For more information, see ???.
Fleet Management
Added support for optional expiration dates on Log Collector enrollment tokens. Users can now specify when tokens should expire during creation.
Note
The default behavior remains unchanged - tokens have no expiration unless explicitly configured.
Metrics and Monitoring
Added new metrics:
currently-submitted-fetches-for-prefetching - Counts the number of pending segment file fetches the prefetcher has requested from the fetching subsystem.
currently-submitted-fetches-for-archiving - Counts the number of pending segment file fetches the bucket archiving job has requested from the fetching subsystem.
Added new metrics for measuring free slots in the transfer queue:
- `bucket-storage-transfer-free-slots`: Measures the number of available slots for bucket transfers within the limits imposed by environment variables such as `S3_STORAGE_CONCURRENCY`
- `node-to-node-transfer-free-slots`: Measures the number of available slots for segment downloads within the limit imposed by the environment variable `SEGMENTMOVER_EXECUTOR_CORES`
Added the metric `currently-submitted-fetches-for-queries`, which measures the number of segment downloads the query scheduler is actively waiting to complete.

This metric differs from `bucket-storage-fetch-for-query-queue` in that the latter counts all fetches the scheduler is planning to do for currently running queries, including those the scheduler has not yet requested.
Auditing and Monitoring
The following audit log types have been removed:
aggregateAlert.add-label
aggregateAlert.remove-label
filterAlert.add-label
filterAlert.remove-label
The following audit log types have been added:
saved-query.add-labels
saved-query.remove-labels
aggregateAlert.add-labels
aggregateAlert.remove-labels
filterAlert.add-labels
filterAlert.remove-labels
alert.add-labels
alert.remove-labels
scheduled-search.add-labels
scheduled-search.remove-labels
uploaded-file.add-labels
uploaded-file.remove-labels
action.add-labels
action.remove-labels
dashboard.add-labels
dashboard.remove-labels
Added audit logging to the Export to File functionality for query results.
This adds two new audit log entries:
dataspace.query.export-file: when a query is exported to a file.
dataspace.query.export-bucket: when a query is streamed to an external file bucket (if the `Export to bucket` feature flag is enabled).
All entries include the following data points:
- `actor` - Export requester data
- `timestamp` - Time of the logging
- `exportedFileName` - Exported file name with the chosen file extension
- `queryId` - The ID of the related query audit log, found through `dataspace.query`
- `csvFieldsExported` (optional) - When exporting a query to CSV, you must select specific fields to include.

If the query is streamed due to size, the selected fields are added directly to the query as a filter using `select()`.

When streaming to a bucket, additional fields are added:
- `bucketProvider` - The bucket provider used to stream the file to (for example, S3)
- `bucket` - The bucket ID used to stream the file to
To fetch information regarding audits for exported query requests, you can run a join query like `defineTable()` or `correlate()` on the `queryId`. For example:

```logscale
correlate(
  exports: { type = /dataspace.query.export/ } include: *,
  queries: { type = "dataspace.query" | queryId <=> exports.queryId } include: [query.queryString, query.ingestStart, query.ingestEnd]
)
```
Fixed in this release
Security
The Service Provider-initiated SAML login protocol has been corrected to route to the default provider instead of the first provider listed.
Installation and Deployment
Fixed an issue in KafkaAdminUtils where a NullPointerException could occur if the code was accessed while a Kafka partition had no leader, causing unnecessary entries in the debug log.
User Interface
When creating a scheduled report, clicking before the data was loaded could result in no selectable dashboards in the dropdown menu. This issue has now been fixed by disabling the button until the data is loaded.
Storage
Fixed an issue affecting clusters with secondary storage where segment files could not be fetched from other nodes or downloaded from bucket storage directly to secondary storage. This issue only occurred when primary storage was approaching capacity and was introduced in version 1.200.
Fixed a rare issue preventing segments from being merged.
Fixed a bug in the ordering of segment downloads. Downloads for queries now get priority over other downloads.
A few issues have been fixed in idle datasource deletion code. The deletion code could delete the last datasource from a partition, which could cause digest to start from scratch on that partition in Kafka.
Fixed an issue where an InterruptedException could occur from `CurrentHostsSyncJob` during system termination, causing unnecessary entries in the debug log.

Fixed an issue, found in version 1.218.0, that could cause bucket uploads to become stuck.
Fixed an issue where a scala.MatchError could be thrown from the metrics system during node shutdown, causing unnecessary entries in the debug log.
Configuration
Error messages pointing to MaxMind configuration instructions contained a wrong documentation URL. The URL has now been updated to the correct location.
Ingestion
Event forwarding rules that reference a saved query will now use the latest version of the saved query if it has been updated.
Log Collector
Fixed several `/api/v1/log-collector` endpoints to return proper status codes for invalid credentials.
Queries
Fixed an issue where the highlighting of query results for regexes with `d` or `F` flags displayed incorrect matches. For example, the regex `/.*$/d` would incorrectly highlight the last line of multi-line text instead of the entire text.

Note
This issue impacted the display only. It did not affect actual query results.
Fixed an issue where warnings produced when merging worker states, such as `groupBy()` function limit breaches, were not consistently attached to a user's query results.
Fleet Management
Adjusted Fleet and Group Management processing to continue applying valid groups when encountering malformed filter queries. Previously, a single group with an invalid filter would prevent all subsequent groups from being processed.
Note
The user interface prevents creation of invalid filters, but filters created before LogScale v1.158.0 may contain malformed queries.
Metrics and Monitoring
Fixed a bug in the `ingest-queue-read-offset-progress-job` that prevented it from finding the `ingest-queue-read-offset` metric. This resolves the error message `Ingest queue progress error: No ingest-queue-read-offset metrics found for partition` that appeared about an hour after cluster restart.
Functions
Fixed an issue related to serialization where queries including `fieldstats()` functions or `count()` with the `distinct` parameter set to `true` would sometimes fail, causing the query to be cancelled.
Packages
Fixed an issue where failed package installations or updates could incorrectly produce audit log events, indicating triggers were created or updated.
Known Issues
Storage
For clusters using secondary storage where the primary storage on some nodes in the cluster may be getting filled (i.e., the storage usage on the primary disk is halfway between `PRIMARY_STORAGE_PERCENTAGE` and `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`), those nodes may fail to transfer segments from other nodes. The failure will be indicated by the error `java.nio.file.AtomicMoveNotSupportedException` with the message "Invalid cross-device link".

This does not corrupt data or cause data loss, but will prevent the cluster from being fully healthy, and could also prevent data from reaching adequate replication.
Improvement
Security
Added the `OrganizationOwnedQueries` permission to the default Admin role.

Note

Existing users' Admin role selections will not be impacted. Only new instances of the Admin role, created when a new customer organization is created, will get this new permission.
User Interface
Dashboards with query parameters now load faster when displaying large suggestion lists. This improvement prevents dashboards from becoming unresponsive, which previously occurred when multiple query parameters contained thousands of suggestions.
Documentation
We have enabled a new search system for the main search pages which includes the following features:
Faster and more efficient searching
Defaults to searching only the current manuals covering the latest active releases
Searching of the full document set is available by selecting the checkbox on the search page
Spelling mistakes are now automatically corrected during the search
Suggestions for alternative search terms (e.g. Virtual Private Network in place of VPN); clicking the links will search for the alternative term
Highlighting of found search terms on pages when you click through to a page; highlights can be removed by clicking the button at the top of the page
Automation and Triggers
Fixed a rare issue where rapidly disabling and re-enabling a scheduled search could cause the next scheduled execution to fail.
The next planned execution time is now preserved when disabling or enabling a scheduled search. It will be updated during the next scheduled search job run after enabling.
Storage
The global snapshot process has been improved to handle uploads one at a time using a dedicated thread. This ensures global snapshot uploads execute as planned and without delay from other uploads in the queue.
Bucket storage prefetch jobs will now download segments from bucket storage to attempt to hit the configured replication factor, even if another node in the cluster already possesses a copy.
AWS' Netty-based HTTP client is now the default for S3 bucket operations. It is also the default client for asynchronous operations in AWS SDK v2.
Users who wish to continue using Apache's Pekko HTTP client can revert by setting `S3_NETTY_CLIENT` to `FALSE`, then restarting the cluster.

This implementation provides the following additional metrics for monitoring the client connection pool:
s3-aws-bucket-available-concurrency
s3-aws-bucket-leased-concurrency
s3-aws-bucket-max-concurrency
s3-aws-bucket-pending-concurrency-acquires
s3-aws-bucket-concurrency-acquire-duration
On clusters where non-humio thread dumps are available, it is also possible to look into the state of the client thread pool by searching for the thread name prefix `bucketstorage-netty`.

The client is set with default values originating from AWS' SDK Netty client. However, users can fine-tune the client further with the following environment variables:
Improved internal queueing logic for bucket uploads and downloads to adjust the order of transfer when there is contention. Transfer order is now as follows:
1. Segment uploads
2. Lookup file uploads
3. Segment downloads
Configuration
The following environment variables have been renamed to reflect their specific usage:
- `NUMBER_OF_ROWS_IN_SEGMENT_TO_HOST_MAPPING_TABLE` changed to `NUMBER_OF_ROWS_IN_OWNER_HOSTS_TABLE`
- `SEGMENT_TO_HOST_MAPPING_TOPOLOGY_CHANGE_SETTLING_TIME_SECONDS` changed to `OWNER_HOSTS_TABLE_TOPOLOGY_CHANGE_SETTLING_TIME_SECONDS`
Ingestion
Improved the handling of digest partition assignment changes. The digest readers now attempt to update the consumed partitions when possible, instead of restarting on changed assignments.
Queries
Implemented query reuse capability for multi-cluster search worker queries, matching the existing functionality for standard cluster queries.
Filter prefix validation has been strengthened: use of query parameters is now explicitly disallowed.
Improved performance for the LogScale Regular Expression Engine V2 by optimizing concatenated repetitions of similar scope and body, i.e. greedy vs. non-greedy repetitions. For example, the regex pattern `.*.*Foo` will now be optimized to `.*Foo`, resulting in significantly improved performance.

Added an optimization related to tag filters. This improvement should slightly speed up `correlate()` queries containing tag filters.

Improved caching of query states to allow partial reuse of query results when querying by ingest time.
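The repetition collapse mentioned above can be sanity-checked in any backtracking engine, since the two patterns accept exactly the same strings. A Python sketch, illustrative only:

```python
import re

# Two adjacent greedy ".*" repetitions collapse into one without changing
# the set of strings the pattern accepts.
verbose = re.compile(r".*.*Foo")
optimized = re.compile(r".*Foo")

samples = ["Foo", "xxFoo", "Foobar", "bar", "xFooyFoo", ""]
assert [bool(verbose.fullmatch(s)) for s in samples] == \
       [bool(optimized.fullmatch(s)) for s in samples]
print("patterns agree on all samples")
```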
Metrics and Monitoring
Added two new metrics:
cluster-static-query-total-search-cost
cluster-static-query-reused-search-cost
These metrics record the total cost of search and cost of reused parts for queries coordinated on a node.
Packages
Improved error messages for package assets violating the latest package schema to better identify which asset specifically is causing validation errors. Error messages now contain the name and type of the offending asset.