Falcon LogScale 1.228.2 LTS (2026-04-09)

Version: 1.228.2
Type: LTS
Release Date: 2026-04-09
Availability: Cloud, On-Prem
End of Support: 2027-04-30
Security Updates: Yes
Upgrades From: 1.150.0
Downgrades To: 1.177.0
Config. Changes: No


These notes include entries from the following previous releases: 1.223.0, 1.222.0, 1.221.0, 1.220.0

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Security

    • Starting from LogScale version 1.237, support for insecure LDAP connections will be removed. Self-Hosted customers using LDAP will only be able to use secure LDAPS connections.

  • User Interface

    • From version 1.225.0, LogScale will enforce a new limit of 10 labels that can be added or removed in bulk for assets such as dashboards, actions, alerts, and scheduled searches.

      Labels will also have a limit of 60 characters.

      Existing assets that violate these newly imposed limits will continue to work until they are updated; at that point, users must remove or reduce their labels to meet the requirements.

  • Queries

    • Due to various upcoming changes to LogScale and the recently introduced regex engine, the following regex features will be removed in version 1.225:

      • Octal notation

      • Quantification of unquantifiable constructs

      Octal notation is being removed because its semantics are difficult to apply consistently and because it makes typographical errors easy to overlook.

      Here is an example of a common octal notation issue:

      regex
      /10\.26.\122\.128/

      In this example, \122 is interpreted as the octal escape for R rather than the intended literal 122. Similarly, the unescaped . matches not just a literal dot but any single character other than a newline.

      Any construction of \x where x is a number from 1 to 9 will always be interpreted as a backreference to a capture group. If the corresponding capture group does not exist, it will be an error.
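      The same pitfall can be reproduced outside LogScale in engines that support octal escapes. As a minimal sketch, Python's re module also treats a three-digit \122 as an octal escape for R, so the mistyped pattern silently stops matching the intended IP address:

```python
import re

# The unescaped '.' after "26" matches any single character, and "\122"
# is an octal escape for 'R' (0o122 == 82 == ord('R')), not a literal "122".
pattern = re.compile(r"10\.26.\122\.128")

# The intended IP address does NOT match:
assert pattern.fullmatch("10.26.122.128") is None

# ...but an unexpected string does ('.' matches 'x', '\122' matches 'R'):
assert pattern.fullmatch("10.26xR.128") is not None
```

      The intended pattern was almost certainly /10\.26\.122\.128/, with every dot escaped and no octal escape at all.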

      Quantification of unquantifiable constructs is being removed due to lack of appropriate semantic logic, leading to redundancy and errors.

      Unquantifiable constructs being removed include:

      • ^ (the start of string/start of line)

      • $ (the end of string/end of line)

      • ?= (a positive lookahead)

      • ?! (a negative lookahead)

      • ?<= (a positive lookbehind)

      • ?<! (a negative lookbehind)

      • \b (a word boundary)

      • \B (a non-word boundary)

      For example, quantifying the end-of-text anchor, as in $*, is meaningless: there can never be more than one occurrence of the end of the text at any given position, so the quantifier adds nothing.

      A common pitfall that causes this warning is when users copy and paste a glob pattern like *abc* in as a regex, but delimit the regex with start of text and end of text anchors:

      regex
      /^*abc*$/

      The proper configuration should look like this:

      regex
      /abc/
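      For illustration, Python's fnmatch module shows what a correct glob-to-regex translation looks like: the * wildcards become .* subpatterns rather than quantifiers applied to anchors, and an unanchored search for the literal is equivalent. A minimal sketch:

```python
import fnmatch
import re

# Translate the glob *abc* into a regex; the wildcards become .* subpatterns
# (e.g. something like '(?s:.*abc.*)\\Z'), never quantified anchors.
translated = fnmatch.translate("*abc*")

assert re.match(translated, "xx-abc-yy") is not None
assert re.match(translated, "xyz") is None

# Equivalently, an unanchored search for the literal works:
assert re.search(r"abc", "xx-abc-yy") is not None
```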

      For more information, see LogScale Regular Expression Engine V2.

Removed

Items that have been removed as of this release.

Configuration

  • Removed the NoCurrentsForBucketSegments feature flag. Its functionality is now permanently enabled.

Deprecation

Items that have been deprecated and may be removed in a future release.

New features and improvements

  • Security

    • Added the dynamic configuration parameter DisableAssetSharing to control whether users have the capability to share assets like dashboards, saved searches, reports, etc. with other users via direct permission assignments. When set to true, only users with changeUserAccess permission can assign direct asset permissions.

      Asset sharing is enabled by default. Administrators can disable it cluster-wide using the dynamic configuration DisableAssetSharing via the GraphQL API.

  • Automation and Triggers

    • Added a new action type for uploading the result of a trigger to an AWS S3 bucket.

      For more information, see Action Type: S3.

  • GraphQL API

    • Added support for an optional end timestamp in per-repository archiving configuration. Segments with start timestamps later than the configured end timestamp are filtered out.

      A new optional parameter endAtDateTime has been added to the following GraphQL endpoints:

    • Added ability to search for triggers by name using the GraphQL API. The new name argument can be used with filterAlert, aggregateAlert, and scheduledSearch fields in SearchDomain, Repository, or View types.

      Note

      name and id arguments cannot be used simultaneously.

  • Metrics and Monitoring

    • Added new CPU measurements to the stat_cpu nonsensitive logger:

      • steal

      • guest

      • guestNice

      These fields are available in the humio repository.

Fixed in this release

  • Security

    • Users who have ManageOrganizations (Cloud) or ManageCluster (Self-Hosted) permissions can now change the Data Retention settings above the repository time limit via the web interface. Previously, these settings could only be changed via GraphQL; this inconsistency has now been fixed.

  • User Interface

    • Fixed an issue with the parser duplication dialog in the UI that incorrectly displayed a repository selector. When duplicating a parser, users can now only duplicate within the same repository, matching the API's actual behavior.

      Note

      The repository selector continues to work as expected for other asset types like saved queries, dashboards, and actions.

  • Automation and Triggers

    • Fixed a rare issue where a trigger deletion could be incorrectly logged as a broken trigger.

  • Storage

    • Fixed an issue where disk clean-up would leak aux/hash files on disk when only the aux/hash files were present and not the segment files themselves. This only affects systems where the KeepSegmentHashFiles feature flag has been enabled.

  • Configuration

    • Fixed an issue where LogScale would reuse existing Kafka bootstrap servers when tracking brokers, even when Kafka clients were not allowed to rebootstrap. This could prevent Kafka clients from reaching the correct Kafka cluster. For reference, rebootstrapping solves a common issue that occurs when the connection is lost to all Kafka brokers known to the user based on the most recent metadata request.

      For example, if "Kafka Broker 1" and "Kafka Broker 2" are running and a user starts "Kafka Broker 3" and "Kafka Broker 4" while shutting down "Kafka Broker 1" and "Kafka Broker 2" at the same time, a client without rebootstrapping loses its connection to Kafka, because only "Kafka Broker 1" and "Kafka Broker 2" are known to it.

      With rebootstrapping enabled, the client retries all of the initial bootstrap servers. If any of them is live, the client will not lose the connection.

      Kafka clients in LogScale are allowed to rebootstrap; rebootstrapping can be disabled by setting the environment variable KAFKA_COMMON_METADATA_RECOVERY_STRATEGY to none.

      Disabling rebootstrapping is generally not recommended. However, it may be necessary if any bootstrap servers that have been specified in KAFKA_SERVERS have a possibility of resolving to a Kafka broker in any cluster other than the original cluster.

      For more information, see the Apache documentation: KIP-899: Allow producer and consumer clients to rebootstrap

  • Ingestion

    • Updated parser/v0.3.0 schema to allow empty rawString values in test cases, ensuring consistency between API-created parsers and YAML export functionality. Previously, parser templates created via CRUD APIs with empty rawString values would fail YAML export due to schema validation.

  • Queries

    • Fixed an issue where an error surfacing during subquery result calculation, such as within join() or defineTable(), would not be visible to the user.

    • Fixed an issue where query results could be incorrectly reused from cache for static queries. Only queries using @ingesttimestamp in conjunction with start() and/or end() functions were affected.

  • Functions

    • Fixed an issue in the match() function where characters with larger lowercase than uppercase UTF-8 representations caused lookup failures.
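      For context, the class of characters involved can be illustrated in Python: some code points have a longer UTF-8 encoding in lowercase than in uppercase, which can trip up any lookup structure that assumes case conversion preserves byte length (stated here only as an illustration, not as a description of LogScale's internals):

```python
# U+023A (LATIN CAPITAL LETTER A WITH STROKE) encodes to 2 bytes in UTF-8,
# but its lowercase form U+2C65 encodes to 3 bytes.
upper = "\u023A"
lower = upper.lower()

assert lower == "\u2C65"
assert len(upper.encode("utf-8")) == 2
assert len(lower.encode("utf-8")) == 3
```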

    • Fixed an issue where prefix values of a certain length could cause an error during the creation of the lookup structure for the match() function.

Known Issues

  • Storage

    • For clusters using secondary storage where the primary storage on some nodes in the cluster may be getting filled (that is, the storage usage on the primary disk is halfway between PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE), those nodes may fail to transfer segments from other nodes. The failure will be indicated by the error java.nio.file.AtomicMoveNotSupportedException with message "Invalid cross-device link".

      This does not corrupt data or cause data loss, but will prevent the cluster from being fully healthy, and could also prevent data from reaching adequate replication.

Improvement

  • Administration and Management

    • For release 1.222.0, several minor internal changes were made; none of them affect the user experience.

  • Falcon Data Replicator

    • Falcon Data Replicator metrics job now uses an HTTP proxy when FDR_USE_PROXY is enabled.

  • User Interface

    • Restored quick-access query links on the Parsers overview. Context-menu actions now let users navigate directly to the Search page with a query for a parser's events or errors. The options are:

      • Query parsed events - Quickly view all events parsed by a specific parser

      • Query parser errors - Instantly see parsing errors for troubleshooting

      For more information, see Manage Parsers.

  • Automation and Triggers

    • Enhanced action logging in humio-activity logs:

      • Successfully invoked actions are now logged in the humio-activity repository with the message Invoking action succeeded.

      • Email actions now include a messageId field for SMTP or Postmark emails.

      • Future SaaS email actions will use a mailstrikeTraceId field.

      • Test actions now log a Successfully invoked test action message.

  • Storage

    • Aligned the check completed during S3 archiving configuration validation with actual archiving upload behavior, enabling support for buckets using Amazon S3 Object Lock.

  • Configuration

    • Migrated from an internal fork to official Apache Pekko releases. Also fixed Google Cloud Storage authentication scope placement to ensure proper handling of read/write permissions.

    • Added a validation check for the configuration variable NODE_ROLES to ensure that it is set only to allowed values (all, httponly, and ingestonly). Invalid node role configurations now prevent LogScale from starting and notify users with an exception message.

      For more information, see NODE_ROLES.

  • Ingestion

    • Improved LogScale's Parser Generator dialog to better handle sample log files:

      • Added clear error messages for log lines exceeding character limits

      • Fixed processing of mixed-size log lines to ensure all valid lines are included

  • Log Collector

    • Implemented disk-based caching for Log Collector artifacts (installers, binaries, scripts) to reduce update server load. The cache automatically manages artifact cleanup based on manifest presence and configurable disk quota limits.

  • Queries

    • Enhanced query performance by implementing hash filter file caching for frequently accessed bucketed segments, even when queries only require hash filter files for search operations.

    • Improved caching of query states to allow partial reuse of query results when querying by event time, improving query performance while reducing query costs.

  • Functions

    • Using the readFile() function with the include argument will now output the columns in the order that the values were provided in the include array.