Falcon LogScale 1.219.1 LTS (2026-02-13)

Version: 1.219.1
Type: LTS
Release Date: 2026-02-13
Availability: Cloud, On-Prem
End of Support: 2027-02-28
Security Updates: Yes
Upgrades From: 1.150.0
Downgrades To: 1.177.0
Config. Changes: No


These notes include entries from the following previous releases: 1.219.0, 1.218.0, 1.217.0, 1.216.0, 1.215.0, 1.214.0

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Automation and Triggers

    • LogScale now enforces a limit of 10 actions per trigger (alert or scheduled search). Existing triggers exceeding this limit will continue to run, but must comply with the limit when edited.

Advance Warning

The following items are due to change in a future release.

  • Security

    • Starting from LogScale version 1.237, support for insecure LDAP connections will be removed. Self-hosted customers using LDAP will only be able to use secure LDAPS connections.

  • User Interface

    • From version 1.225.0, LogScale will enforce a new limit of 10 labels that can be added or removed in bulk for assets such as dashboards, actions, alerts and scheduled searches.

      Labels will also have a character limit of 60.

      Existing assets that violate these newly imposed limits will continue to work until they are updated - users will then be forced to remove or reduce their labels to meet the requirement.

  • Queries

    • Due to various upcoming changes to LogScale and the recently introduced regex engine, the following regex features will be removed in version 1.225:

      • Octal notation

      • Quantification of unquantifiable constructs

      Octal notation is being removed because it is difficult to apply consistently and because it makes typographical errors easy to overlook.

      Here is an example of a common octal notation issue:

      regex
      /10\.26.\122\.128/

      In this example, \122 is interpreted as the octal escape for R rather than the intended literal 122. Similarly, the unescaped . matches any single character except newlines, not just the literal dot.

      Any construct \x, where x is a number from 1 to 9, will always be interpreted as a backreference to a capture group. If the corresponding capture group does not exist, it is an error.
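
      The octal pitfall and the backreference rule can be illustrated with Python's re module, whose escape handling for \122 mirrors the behavior described above (this is an illustration only, not LogScale's engine):

      ```python
      import re

      # In the pattern /10\.26.\122\.128/, \122 is an octal escape:
      # 0o122 == 82 == ord("R"), so it matches the letter R, not "122".
      assert chr(0o122) == "R"
      assert re.search(r"\122", "R") is not None
      assert re.search(r"\122", "122") is None

      # With a capture group present, \1 is a backreference instead:
      assert re.fullmatch(r"(ab)\1", "abab") is not None
      ```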

      Quantification of unquantifiable constructs is being removed due to lack of appropriate semantic logic, leading to redundancy and errors.

      Unquantifiable constructs being removed include:

      • ^ (the start of string/start of line)

      • $ (the end of string/end of line)

      • ?= (a positive lookahead)

      • ?! (a negative lookahead)

      • ?<= (a positive lookbehind)

      • ?<! (a negative lookbehind)

      • \b (a word boundary)

      • \B (a non-word boundary)

      For example, the end-of-text construct $* only has meaning for a limited number of occurrences. There can never be more than one occurrence of the end of the text at any given position, making elements like $ redundant.

      A common pitfall that causes this warning is when users copy and paste a glob pattern like *abc* in as a regex, but delimit the regex with start of text and end of text anchors:

      regex
      /^*abc*$/

      The proper configuration should look like this:

      regex
      /abc/

      For more information, see LogScale Regular Expression Engine V2.

Removed

Items that have been removed as of this release.

Storage

  • Segment and lookup file bucket storage upload protocols have been improved in preparation for incoming changes. As a result, the metric bucket-storage-request-upload-queue-overflow has been removed, as the underlying logic this metric was measuring no longer exists.

Configuration

  • Removed the following deprecated configuration variables:

    • S3_STORAGE_FORCED_COPY_SOURCE

    • S3_BUCKET_STORAGE_PREFERRED_MEANS_FORCED

    Users previously using S3_STORAGE_FORCED_COPY_SOURCE should now use S3_STORAGE_PREFERRED_COPY_SOURCE instead.

  • Removed SEGMENT_TO_HOST_MAPPING_CRASH_SETTLING_TIME_SECONDS configuration as the logic is now handled internally according to Heartbeats.

Deprecation

Items that have been deprecated and may be removed in a future release.

Behavior Changes

Scripts or environment which make use of these tools should be checked and updated for the new configuration:

  • Installation and Deployment

    • LogScale has temporarily downgraded its version of Java to v24 due to a potential regression in Java v25, which could affect digest when using zstd compression in Kafka. The downgrade will remain in effect until the issue is resolved, or Java v25 is confirmed benign.

  • Ingestion

    • The environment variable KAFKA_INGEST_QUEUE_SKIP_ON_ERROR must now be explicitly set to skip messages from the ingest queue. Previously, specific corrupt Kafka records would be automatically skipped, even if the variable was set to false.

  • Queries

    • Filter prefixes have been refactored to change the way they are validated - as a result, the diagnostic message for all prefixes has been changed.

      A query prefix may only contain pure filters. Transformations, aggregations etc. are not allowed. Functions are also disallowed, even if their behavior is purely filtering.

Upgrades

Changes that may occur or be required during an upgrade.

  • User Interface

    • Upgraded the API explorer to GraphiQL version 5.2.0.

  • Configuration

    • LogScale has upgraded its Netty version to 4.2.7.

New features and improvements

  • User Interface

    • The following bulk actions can now be performed on multiple assets:

      • Delete

      • Assign labels

      • Export as .zip file

      Assets that support this feature include:

      • Actions

      • Dashboards

      • Interactions

      • Lookup files

      • Parsers

      • Triggers

      LogScale now also supports enabling and disabling triggers in bulk.

      Corresponding GraphQL Batch operations are also available.

      For more information, see Table Components.

  • Documentation

    • The release note search system has been updated to provide more functionality across a wider range of products. Release note search now supports searching individual products (LogScale, Log Collector, Aux PDF and Humio Operator):

      • We now have full release notes for each of these products with their own dedicated page and entries.

      • Improved search speed and filtering

      • Release note searches can now be saved and shared

      With this change, the Full Release Notes Index page has been deprecated as the new search page provides better functionality for searching the release note system. See RN Issue.

  • Automation and Triggers

    • Added a new system repository humio-trigger-execution-info, which contains information about the execution of triggers. This new system repository is meant to be consumed by other systems; for a human-readable version, refer to the humio-activity repository.

      Currently, this new system repository only contains information about the execution of scheduled searches, not alerts.

    • A new message template for formatting timestamps is now available for providing more formatting options. It applies to query_end, query_start, and triggered timestamps. For example: {format_time(triggered, "yyyy-MM-dd'T'HH:mm:ssX")}.

      For more information, see Message Templates and Variables.
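
      The pattern "yyyy-MM-dd'T'HH:mm:ssX" uses Java-style pattern letters, where X renders the UTC offset as Z for UTC. A rough Python equivalent, assuming a triggered timestamp given in epoch milliseconds (the sample value is hypothetical):

      ```python
      from datetime import datetime, timezone

      # Hypothetical triggered timestamp in epoch milliseconds.
      triggered_ms = 1764765006226

      # Render it the way {format_time(triggered, "yyyy-MM-dd'T'HH:mm:ssX")}
      # would for UTC ("Z" is hard-coded here since the datetime is UTC).
      ts = datetime.fromtimestamp(triggered_ms / 1000, tz=timezone.utc)
      print(ts.strftime("%Y-%m-%dT%H:%M:%SZ"))  # prints 2025-12-03T12:30:06Z
      ```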

  • Storage

    • Enabled new bucket queue implementation by default. It can be disabled via the NewFileTransferQueuing feature flag.

  • API

    • Added a new parameter nextRunInterval to the POST api/v1/queryjobs endpoint for query submission. This parameter provides a hint to the query engine about the next run's interval, improving performance through partial result reuse.

      Example usage:

      json
      {
        [...]

        "nextRunInterval": {
          "start": 1764765006226,
          "end": 1764851406227
        }
      }

      Note

      This parameter is relevant only when the same query is submitted repeatedly over different time intervals.
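
      A minimal sketch of building such a request body in Python. build_queryjobs_payload is a hypothetical helper; only nextRunInterval comes from this release note, and the remaining field names (queryString, start, end) are assumed to follow the usual queryjobs request shape:

      ```python
      import json

      def build_queryjobs_payload(query, start_ms, end_ms,
                                  next_start_ms, next_end_ms):
          """Assemble a queryjobs request body with a nextRunInterval hint."""
          return {
              "queryString": query,   # assumed standard queryjobs fields
              "start": start_ms,
              "end": end_ms,
              # Hint about the next run's interval, enabling the engine to
              # partially reuse this run's results.
              "nextRunInterval": {"start": next_start_ms, "end": next_end_ms},
          }

      payload = build_queryjobs_payload("count()",
                                        1764765006226, 1764851406227,
                                        1764851406227, 1764937806228)
      print(json.dumps(payload, indent=2))
      ```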

    • Added a new admin-level API for unsetting a segment's bucketId field. This is for segments that are on disk but not in bucket storage. In cases where a bucket storage has lost data, this API can be used to remove corresponding metadata from LogScale, ending repeated attempts to download the missing files.

      Usage requires a POST call to the following endpoint, where bucketField specifies which bucket field to unset (e.g., "primary" or "secondary"):

      /api/v1/dataspaces/${dataspaceId}/datasources/${datasourceId}/segments/${segmentId}/unset-bucket-id?bucketField=${bucketField}

      Here's an example:

      shell
      curl -X POST "https://${clusterUrl}/api/v1/dataspaces/${dataspaceId}/datasources/${datasourceId}/segments/${segmentId}/unset-bucket-id?bucketField=primary" \
        -H "Authorization: Bearer ${token}"
    • Added the parameter queryKind to the GraphQL mutation analyzeQuery, which indicates what kind of query program is being validated/analyzed.

      Valid values for a standard search query are:

      graphql
      {standardSearch: {} }

      Valid values for a filter-prefix are:

      graphql
      { filterPrefix: {} }
  • Configuration

    • Added a new dynamic configuration GraphQLMaxErrorsCount, to configure the maximum number of errors returned in the GraphQL response errors array. Default value is 100, with valid values between 1 and 10000.

  • Dashboards and Widgets

    • A new styling option in the Table widget lets users configure custom column labels:

      • Users can now rename column headers directly in the table widget's style configuration panel.

      • Custom column labels are preserved when switching between columns and refreshing the view.

      For more information, see Table Property Reference.

    • A new styling option in the Table widget now allows users to reorder columns. A reset button is also available for restoring the original column order of the query result.

      For more information, see Table Property Reference.

    • Table widgets now support a new Column overflow setting with options to either truncate or wrap text content. Users can now control how to handle long text entries in table columns, improving readability and visual organization of various data and display preferences.

      The setting is available in the widget style panel under General.

      For more information, see Table Widget.

  • Queries

    • Added support in the LogScale Regular Expression Engine V2 for hexadecimal escape sequences up to 4 digits in length using the following formats:

      • \x{n}

      • \x{nn}

      • \x{nnn}

      • \x{nnnn}

      Note

      Curly brackets are required for this syntax. This is in addition to the existing \xnn and \unnnn notations.
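
      The curly-bracket \x{...} forms are PCRE-style and specific to LogScale's V2 engine; Python's re module supports only the fixed-width \xhh and \uhhhh escapes, shown here purely to illustrate the pre-existing notations that remain supported:

      ```python
      import re

      # \x41 is the two-digit hex escape for "A" (0x41 == 65 == ord("A")).
      assert re.fullmatch(r"\x41", "A") is not None

      # \u00e9 is the four-digit escape for "é" (0xE9 == 233).
      assert re.fullmatch(r"\u00e9", "\u00e9") is not None
      ```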

    • Added support for repeated backreferences in the LogScale Regular Expression Engine V2. For example, the pattern

      regex
      (.)\1{2,3}

      can now be used to detect sequences of repeated characters.
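
      Python's re engine accepts the same construct, so it can be used to illustrate what the pattern matches (three to four consecutive occurrences of the same character):

      ```python
      import re

      # (.) captures one character; \1{2,3} requires 2-3 more copies of it,
      # i.e. a run of 3-4 identical characters.
      pattern = re.compile(r"(.)\1{2,3}")

      assert pattern.search("aaab") is not None   # "aaa" is a run of 3
      assert pattern.search("xxxx") is not None   # "xxxx" is a run of 4
      assert pattern.search("abab") is None       # no run of 3 identical chars
      ```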

    • Views can now be configured to resolve saved queries, lookup files and field aliases from a different view or repository.

      For more information, see ???.

  • Fleet Management

    • Added support for optional expiration dates on Log Collector enrollment tokens. Users can now specify when tokens should expire during creation.

      Note

      The default behavior remains unchanged - tokens have no expiration unless explicitly configured.

  • Metrics and Monitoring

    • Added new metrics:

      • currently-submitted-fetches-for-prefetching - Counts the number of pending segment file fetches the prefetcher has requested from the fetching subsystem.

      • currently-submitted-fetches-for-archiving - Counts the number of pending segment file fetches the bucket archiving job has requested from the fetching subsystem.

    • Added new metrics for measuring free slots in the transfer queue:

      • bucket-storage-transfer-free-slots: Measures the number of available slots for bucket transfers within the limits imposed by environment variables such as S3_STORAGE_CONCURRENCY

      • node-to-node-transfer-free-slots: Measures the number of available slots for segment downloads within the limit imposed by the environment variable SEGMENTMOVER_EXECUTOR_CORES

    • Added the metric currently-submitted-fetches-for-queries, which measures the number of segment downloads the query scheduler is actively waiting to complete.

      This metric differs from bucket-storage-fetch-for-query-queue in that the latter counts all fetches the scheduler is planning to do for currently running queries, including those the scheduler has not yet requested.

  • Auditing and Monitoring

    • The following audit log types have been removed:

      • aggregateAlert.add-label

      • aggregateAlert.remove-label

      • filterAlert.add-label

      • filterAlert.remove-label

      The following Audit Log types have been added:

      • saved-query.add-labels

      • saved-query.remove-labels

      • aggregateAlert.add-labels

      • aggregateAlert.remove-labels

      • filterAlert.add-labels

      • filterAlert.remove-labels

      • alert.add-labels

      • alert.remove-labels

      • scheduled-search.add-labels

      • scheduled-search.remove-labels

      • uploaded-file.add-labels

      • uploaded-file.remove-labels

      • action.add-labels

      • action.remove-labels

      • dashboard.add-labels

      • dashboard.remove-labels

    • Added audit logging to the Export to File functionality for query results.

      This adds two new audit log entries:

      • dataspace.query.export-file: when a query is exported to a file.

      • dataspace.query.export-bucket: when a query is streamed to an external file bucket (if the Export to bucket feature flag is enabled).

      All entries include the following data points:

      • actor - Export requester data

      • timestamp - Time of the logging

      • exportedFileName - Exported file name with the file extension chosen

      • queryId - The ID of the related query audit log found through dataspace.query

      • csvFieldsExported (optional) - When exporting a query to CSV, you must select specific fields to include.

      If the query is streamed due to size, the selected fields are added directly to the query as a filter using select().

      When streaming to a bucket, additional fields are added:

      • bucketProvider - The bucket provider used to stream the file to (for example, S3)

      • bucket - The bucket ID used to stream the file to

      To fetch information regarding audits for exported query requests, you can run a join query like defineTable() or correlate() on the queryId. For example:

      logscale
      correlate(
        exports: { type = /dataspace.query.export/ } include: *,
        queries: { type = "dataspace.query" | queryId <=> exports.queryId } include: [query.queryString, query.ingestStart, query.ingestEnd]
      )

Fixed in this release

  • Security

    • The Service Provider-initiated SAML login protocol has been corrected to route to the default provider instead of the first provider listed.

  • Installation and Deployment

    • Fixed an issue in KafkaAdminUtils where a NullPointerException could occur if the code was accessed while a Kafka partition had no leader, causing unnecessary entries in the debug log.

  • Storage

    • Fixed an issue affecting clusters with secondary storage where segment files could not be fetched from other nodes or downloaded from bucket storage directly to secondary storage. This issue only occurred when primary storage was approaching capacity and was introduced in version 1.200.

    • Fixed a rare issue preventing segments from being merged.

    • Fixed a bug in the ordering of segment downloads. Downloads for queries now get priority over other downloads.

    • Fixed issues in the idle datasource deletion code, which could delete the last datasource from a partition, causing digest to start from scratch on that partition in Kafka.

    • Fixed an issue where an InterruptedException could occur from CurrentHostsSyncJob during system termination, causing unnecessary entries in the debug log.

    • Fixed an issue, found in version 1.218.0, that could cause bucket uploads to become stuck.

    • Fixed an issue where a scala.MatchError could be thrown from the metrics system during node shutdown, causing unnecessary entries in the debug log.

  • Configuration

    • Error messages pointing to MaxMind configuration instructions contained a wrong documentation URL. The URL now points to the correct location.

  • Ingestion

    • Event forwarding rules that reference a saved query will now use the latest version of the saved query if it has been updated.

  • Log Collector

    • Fixed several /api/v1/log-collector endpoints to return proper status codes for invalid credentials.

  • Queries

    • Fixed an issue where query result highlighting for regexes with the d or F flags displayed incorrect matches. For example, the regex /.*$/d would incorrectly highlight the last line of multi-line text instead of the entire text.

      Note

      This issue impacted the display only. It did not affect actual query results.

    • Fixed an issue where warnings produced when merging worker states, such as groupBy() function limit breaches, were not consistently attached to a user's query results.

  • Fleet Management

    • Adjusted Fleet and Group Management processing to continue applying valid groups when encountering malformed filter queries. Previously, a single group with an invalid filter would prevent all subsequent groups from being processed.

      Note

      The user interface prevents creation of invalid filters, but filters created before LogScale v1.158.0 may contain malformed queries.

  • Metrics and Monitoring

    • Fixed a bug in the ingest-queue-read-offset-progress-job that prevented it from finding the ingest-queue-read-offset metric. This resolves the error message Ingest queue progress error: No ingest-queue-read-offset metrics found for partition that appeared about an hour after cluster restart.

  • Functions

    • Fixed an issue related to serialization where queries including fieldstats() functions or count() with the distinct parameter set to true would sometimes fail, causing the query to be cancelled.

    • Fixed an issue with the match() function lookup structure that occurred when nrows > 1 and keys are prefixes of each other, leading to missing results.

  • Packages

    • Fixed an issue where failed package installations or updates could incorrectly produce audit log events indicating that triggers were created or updated.

Known Issues

  • Storage

    • For clusters using secondary storage where the primary storage on some nodes in the cluster may be getting filled (i.e. the storage usage on the primary disk is halfway between PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE), those nodes may fail to transfer segments from other nodes. The failure will be indicated by the error java.nio.file.AtomicMoveNotSupportedException with message "Invalid cross-device link".

      This does not corrupt data or cause data loss, but will prevent the cluster from being fully healthy, and could also prevent data from reaching adequate replication.

Improvement

  • Security

    • Added the OrganizationOwnedQueries permission to the default Admin role.

      Note

      Existing users' Admin roles will not be impacted. Only new instances of the Admin role, created when a new customer organization is created, will get this new permission.

  • User Interface

    • Dashboards with query parameters now load faster when displaying large suggestion lists. This improvement prevents dashboards from becoming unresponsive, which previously occurred when multiple query parameters contained thousands of suggestions.

  • Documentation

    • We have enabled a new search system for the main search pages which includes the following features:

      • Faster and more efficient searching

      • Defaults to searching only the current manuals covering the latest active releases

      • Searching of the full document set is available by selecting the checkbox on the search page

      • Spelling mistakes are now automatically corrected during the search

      • Suggestions for alternative search terms (e.g. Virtual Private Network in place of VPN); clicking the links will search for the alternative term

      • Highlighting of found search terms on pages when you click through to a page; highlights can be removed by clicking the button at the top of the page

  • Automation and Triggers

    • Fixed a rare issue where rapidly disabling and re-enabling a scheduled search could cause the next scheduled execution to fail.

      The next planned execution time is now preserved when disabling or enabling a scheduled search. It will be updated during the next scheduled search job run after enabling.

  • Storage

    • The global snapshot process has been improved to handle uploads one at a time using a dedicated thread. This ensures global snapshot uploads execute as planned and without delay from other uploads in the queue.

    • Bucket storage prefetch jobs will now download segments from bucket storage to attempt to hit the configured replication factor, even if another node in the cluster already possesses a copy.

    • AWS' Netty-based HTTP client is now the default for S3 bucket operations. It is also the default client for asynchronous operations in AWS SDK v2.

      Users who wish to continue using Apache's Pekko HTTP client can revert by setting S3_NETTY_CLIENT to FALSE, then restarting the cluster.

      This implementation provides the following additional metrics for monitoring the client connection pool:

      • s3-aws-bucket-available-concurrency

      • s3-aws-bucket-leased-concurrency

      • s3-aws-bucket-max-concurrency

      • s3-aws-bucket-pending-concurrency-acquires

      • s3-aws-bucket-concurrency-acquire-duration

      On clusters where non-humio thread dumps are available, it is also possible to look into the state of the client thread pool by searching for the thread name prefix bucketstorage-netty.

      The client is set with default values originating from AWS' SDK Netty client. However, users can fine-tune the client further with dedicated environment variables.

    • Improved internal queueing logic for bucket uploads and downloads to adjust the order of transfer when there is contention. Transfer order is now as follows:

      1. Segment uploads

      2. Lookup file uploads

      3. Segment downloads

  • Configuration

    • The following environment variables have been renamed to reflect their specific usage:

      • NUMBER_OF_ROWS_IN_SEGMENT_TO_HOST_MAPPING_TABLE changed to NUMBER_OF_ROWS_IN_OWNER_HOSTS_TABLE

      • SEGMENT_TO_HOST_MAPPING_TOPOLOGY_CHANGE_SETTLING_TIME_SECONDS changed to OWNER_HOSTS_TABLE_TOPOLOGY_CHANGE_SETTLING_TIME_SECONDS

  • Ingestion

    • Improved the handling of digest partition assignment changes. The digest readers now attempt to update the consumed partitions when possible, instead of restarting on changed assignments.

  • Queries

    • Implemented query reuse capability for multi-cluster search worker queries, matching the existing functionality for standard cluster queries.

    • Filter prefix validation has been strengthened: use of query parameters is now explicitly disallowed.

    • Improved performance for the LogScale Regular Expression Engine V2 by optimizing concatenated repetitions of similar scope and body, whether greedy or non-greedy. For example, the regex pattern .*.*Foo will now be optimized to .*Foo, resulting in significantly improved performance.
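
      The two forms accept exactly the same inputs, which is what makes the rewrite safe; a quick check using Python's re module for illustration:

      ```python
      import re

      # .*.*Foo and .*Foo match the same set of strings: the first .* can
      # always match empty, so collapsing the pair changes nothing.
      a = re.compile(r".*.*Foo")
      b = re.compile(r".*Foo")

      for s in ["Foo", "xxFoo", "FooBar", "nope"]:
          assert bool(a.search(s)) == bool(b.search(s))
      ```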

    • Added optimization related to tag filters. This improvement should slightly speed up correlate() queries containing tag filters.

    • Improved caching of query states to allow partial reuse of query results when querying by ingest time.

  • Metrics and Monitoring

    • Added two new metrics:

      • cluster-static-query-total-search-cost

      • cluster-static-query-reused-search-cost

      These metrics record the total cost of search and cost of reused parts for queries coordinated on a node.

  • Packages

    • Improved error messages for package assets violating the latest package schema to better identify which asset specifically is causing validation errors. Error messages now contain the name and type of the offending asset.