Falcon LogScale 1.228.2 LTS (2026-04-09)
| Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Downgrades To | Config. Changes |
|---|---|---|---|---|---|---|---|---|
| 1.228.2 | LTS | 2026-04-09 | Cloud, On-Prem | 2027-04-30 | Yes | 1.150.0 | 1.177.0 | No |
Download

Use `docker pull humio/humio-core:1.228.2` to download the latest version.
These notes include entries from the following previous releases: 1.223.0, 1.222.0, 1.221.0, 1.220.0
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Security
Starting from LogScale version 1.237, support for insecure `ldap` connections will be removed. Self-Hosted customers using LDAP will only be able to use secure `ldaps` connections.
User Interface
From version 1.225.0, LogScale will enforce a new limit of 10 labels that can be added or removed in bulk for assets such as dashboards, actions, alerts and scheduled searches.
Labels will also have a character limit of 60.
Existing assets that violate these newly imposed limits will continue to work until they are updated - users will then be forced to remove or reduce their labels to meet the requirement.
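The upcoming limits could be checked client-side along these lines (an illustrative sketch; the constants mirror the limits above, but the helper function is hypothetical and not a LogScale API):

```python
# Illustrative check of the upcoming label limits: at most 10 labels
# added or removed per bulk operation, each at most 60 characters.
# validate_bulk_labels is a hypothetical helper, not a LogScale API.
MAX_BULK_LABELS = 10
MAX_LABEL_LENGTH = 60

def validate_bulk_labels(labels):
    if len(labels) > MAX_BULK_LABELS:
        raise ValueError(f"at most {MAX_BULK_LABELS} labels per bulk operation")
    for label in labels:
        if len(label) > MAX_LABEL_LENGTH:
            raise ValueError(f"label exceeds {MAX_LABEL_LENGTH} characters: {label[:20]}...")
    return labels

assert validate_bulk_labels(["prod", "team-a"]) == ["prod", "team-a"]
```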
Queries
Due to various upcoming changes to LogScale and the recently introduced regex engine, the following regex features will be removed in version 1.225:
Octal notation
Quantification of unquantifiable constructs
Octal notation is being removed because its logic is difficult to apply consistently and it makes typographical errors easy to overlook.
Here is an example of a common octal notation issue:
regex/10\.26.\122\.128/

In this example, `\122` is interpreted as the octal escape for `R` rather than the intended literal `122`. Similarly, the unescaped `.` matches not just the punctuation itself but any single character except newlines.

Any construct of the form `\x`, where `x` is a single digit from 1 to 9, will always be interpreted as a backreference to a capture group. If the corresponding capture group does not exist, it is an error.

Quantification of unquantifiable constructs is being removed because these combinations lack meaningful semantics, leading to redundancy and errors.
Unquantifiable constructs being removed include:

- `^` (the start of string/start of line)
- `$` (the end of string/end of line)
- `?=` (a positive lookahead)
- `?!` (a negative lookahead)
- `?<=` (a positive lookbehind)
- `?<!` (a negative lookbehind)
- `\b` (a word boundary)
- `\B` (a non-word boundary)

For example, the end-of-text construct `$` cannot meaningfully be quantified: there can never be more than one occurrence of the end of the text at any given position, so a pattern like `$*` is redundant.

A common pitfall that causes this warning is when users copy and paste a glob pattern such as `*abc*` in as a regex, delimiting it with start-of-text and end-of-text anchors:

regex/^*abc*$/

The proper equivalent looks like this:

regex/abc/

For more information, see LogScale Regular Expression Engine V2.
Removed
Items that have been removed as of this release.
Configuration
Removed the `NoCurrentsForBucketSegments` feature flag. Its functionality is now permanently enabled.
Deprecation
Items that have been deprecated and may be removed in a future release.
In order to simplify and clean up older documentation and manuals that refer to past versions of LogScale and related products, the following manual versions will be archived after 15th December 2025:
This archiving will improve the efficiency and navigability of the site.

Archived manuals will be available in a download-only format in an archive area of the documentation. Manuals that have been archived will no longer be included in search results or be viewable online through the documentation portal.
The following GraphQL APIs are deprecated and will be removed in version 1.225 or later:
In the updateSettings mutation, these input arguments are deprecated:
isPackageDocsMessageDismissed
isDarkModeMessageDismissed
isResizableQueryFieldMessageDismissed
On the UserSettings type, these fields are deprecated:
isPackageDocsMessageDismissed
isDarkModeMessageDismissed
Note
The deprecated input arguments will have no effect, and the deprecated fields will always return true until their removal.
The userId parameter for the updateDashboardToken GraphQL mutation has been deprecated and will be removed in version 1.273.
`rdns()` has been deprecated and will be removed in version 1.249. Use `reverseDns()` as an alternative function.

The Secondary Storage feature is now deprecated and will be removed in LogScale 1.231.0.

The Bucket Storage feature provides superior functionality for storing rarely queried data in cheaper storage while keeping frequently queried data in hot (fast but expensive) storage. For more information, see Bucket Storage.

Please contact LogScale support with any concerns about this deprecation.
New features and improvements
Security
Added the dynamic configuration parameter `DisableAssetSharing` to control whether users can share assets such as dashboards, saved searches, and reports with other users via direct permission assignments. When set to `true`, only users with the `changeUserAccess` permission can assign direct asset permissions.

Asset sharing is enabled by default. Administrators can disable it cluster-wide using the dynamic configuration `DisableAssetSharing` via the GraphQL API.
Automation and Triggers
Added a new action type for uploading the result of a trigger to an AWS S3 bucket.
For more information, see Action Type: S3.
GraphQL API
Added the option for end timestamp functionality for per-repository archiving configuration. This filters out segments with start timestamps later than the configured end timestamp.
A new optional parameter `endAtDateTime` has been added to the following GraphQL endpoints:

Added the ability to search for triggers by name using the GraphQL API. The new `name` argument can be used with the `filterAlert`, `aggregateAlert`, and `scheduledSearch` fields in the `SearchDomain`, `Repository`, or `View` types.
Note
The `name` and `id` arguments cannot be used simultaneously.
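The per-repository archiving end timestamp described above effectively filters segments by their start timestamp; a rough sketch of the rule (function and field names here are illustrative, not LogScale internals):

```python
from datetime import datetime, timezone

# Hypothetical sketch of the endAtDateTime rule: segments whose start
# timestamp is later than the configured end timestamp are excluded
# from archiving. segments_to_archive and its inputs are made up.
def segments_to_archive(segments, end_at):
    return [s for s in segments if s["start"] <= end_at]

end_at = datetime(2026, 1, 1, tzinfo=timezone.utc)
segments = [
    {"id": "old", "start": datetime(2025, 6, 1, tzinfo=timezone.utc)},
    {"id": "new", "start": datetime(2026, 3, 1, tzinfo=timezone.utc)},
]
assert [s["id"] for s in segments_to_archive(segments, end_at)] == ["old"]
```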
Metrics and Monitoring
Added new CPU measurements to the `stat_cpu` nonsensitive logger:

- `steal`
- `guest`
- `guestNice`
These fields are available in the humio repository.
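These names match the steal, guest, and guest_nice CPU columns that Linux exposes in /proc/stat; a quick illustration of where such numbers come from (sample data for illustration, not LogScale code):

```python
# Parse a sample /proc/stat "cpu" line. The column order shown is the
# documented Linux order; the numbers are made up for the example.
SAMPLE = "cpu  4705 356 584 3699176 23060 123 456 78 90 12"
FIELDS = ["user", "nice", "system", "idle", "iowait", "irq",
          "softirq", "steal", "guest", "guest_nice"]

values = dict(zip(FIELDS, map(int, SAMPLE.split()[1:])))
assert values["steal"] == 78
assert values["guest"] == 90
assert values["guest_nice"] == 12
```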
Fixed in this release
Security
Users who have the `ManageOrganizations` (Cloud) or `ManageCluster` (Self-Hosted) permission can now change the Data Retention settings above the repository time limit via the web interface. Previously, changing these settings was only possible via GraphQL; this inconsistency has now been fixed.
User Interface
Fixed an issue with the parser duplication dialog in the UI that incorrectly displayed a repository selector. When duplicating a parser, users can now only duplicate within the same repository, matching the API's actual behavior.
Note
The repository selector continues to work as expected for other asset types like saved queries, dashboards, and actions.
Automation and Triggers
Fixed a rare issue where a trigger deletion could be incorrectly logged as a broken trigger.
Storage
Fixed an issue where disk clean-up would leak aux/hash files on disk when only the aux/hash files were present and not the segment files themselves. This only affects systems where the `KeepSegmentHashFiles` feature flag has been enabled.
Configuration
Fixed an issue where LogScale would reuse existing Kafka bootstrap servers when tracking brokers, even when Kafka clients were not allowed to rebootstrap. This could prevent Kafka clients from reaching the correct Kafka cluster. For reference, rebootstrapping solves a common issue that occurs when the connection is lost to all Kafka brokers known to the user based on the most recent metadata request.
For example, if a user has "Kafka Broker 1" and "Kafka Broker 2" running and attempts to turn on "Kafka Broker 3" and "Kafka Broker 4" while turning off "Kafka Broker 1" and "Kafka Broker 2" at the same time, a non-rebootstrapping user would lose connection to Kafka because only "Kafka Broker 1" and "Kafka Broker 2" are known to it.
With rebootstrapping enabled, users are able to retry all initial bootstrap servers. If any server is live, the client will not lose connection.
Kafka clients in LogScale are allowed to rebootstrap by default. Rebootstrapping can be disabled by setting the environment variable `KAFKA_COMMON_METADATA_RECOVERY_STRATEGY` to `none`.

Disabling rebootstrapping is generally not recommended. However, it may be necessary if any bootstrap servers specified in `KAFKA_SERVERS` could resolve to a Kafka broker in a cluster other than the original one.

For more information, see the Apache documentation: KIP-899: Allow producer and consumer clients to rebootstrap
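The difference can be sketched with a toy model (pure illustration, not Kafka client code; it assumes the bootstrap list in `KAFKA_SERVERS` names all four broker addresses, so the initial list covers the new brokers):

```python
# Toy model of the rebootstrap scenario above. A client that only
# retries brokers from its most recent metadata loses connectivity when
# those brokers are replaced; a rebootstrapping client falls back to
# its initial bootstrap list.
def reachable(brokers_to_try, live_brokers):
    """A client stays connected if any broker it retries is live."""
    return any(b in live_brokers for b in brokers_to_try)

bootstrap = {"broker1", "broker2", "broker3", "broker4"}  # initial list
last_metadata = {"broker1", "broker2"}  # brokers known before the swap
live = {"broker3", "broker4"}           # 1 and 2 off, 3 and 4 on

# Without rebootstrap: only brokers from the last metadata response are
# retried, and none of them are live any more.
assert not reachable(last_metadata, live)

# With rebootstrap: the full initial bootstrap list is retried.
assert reachable(last_metadata | bootstrap, live)
```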
Ingestion
Updated parser/v0.3.0 schema to allow empty rawString values in test cases, ensuring consistency between API-created parsers and YAML export functionality. Previously, parser templates created via CRUD APIs with empty rawString values would fail YAML export due to schema validation.
Queries
Fixed an issue where an error surfacing during subquery result calculation, such as within `join()` or `defineTable()`, would not be visible to the user.

Fixed an issue where query results could be incorrectly reused from the cache for static queries. Only queries using @ingesttimestamp in conjunction with the `start()` and/or `end()` functions were affected.
Functions
Known Issues
Storage
For clusters using secondary storage where the primary storage on some nodes may be getting filled (that is, the storage usage on the primary disk is halfway between `PRIMARY_STORAGE_PERCENTAGE` and `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`), those nodes may fail to transfer segments from other nodes. The failure is indicated by the error java.nio.file.AtomicMoveNotSupportedException with the message "Invalid cross-device link".

This does not corrupt data or cause data loss, but it will prevent the cluster from being fully healthy and could also prevent data from reaching adequate replication.
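As a rough illustration of the affected window (the threshold arithmetic here is an assumption for illustration, not the exact internal check LogScale performs):

```python
# Assumed illustration: a node is in the affected window when its
# primary-disk usage is at or past the midpoint between the two
# configured thresholds but below the maximum fill percentage.
def in_affected_window(usage_pct, storage_pct, max_fill_pct):
    midpoint = (storage_pct + max_fill_pct) / 2
    return midpoint <= usage_pct < max_fill_pct

assert in_affected_window(90, 80, 95)       # midpoint is 87.5
assert not in_affected_window(85, 80, 95)   # still below the midpoint
```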
Improvement
Administration and Management
For release 1.222.0, several minor internal changes were completed for processes unrelated to the user's experience.
Falcon Data Replicator
The Falcon Data Replicator metrics job now uses an HTTP proxy when `FDR_USE_PROXY` is enabled.
User Interface
Restored quick-access query links from the Parsers overview. Users can now use context-menu actions to navigate directly to the Search page with a query for parser events and errors. Options are as follows:

- Quickly view all events parsed by a specific parser
- Instantly see parsing errors for troubleshooting
For more information, see Manage Parsers.
Automation and Triggers
Enhanced action logging in humio-activity logs:

- Successfully triggered actions are now logged in the humio-activity repository with the message `Invoking action succeeded`.
- Email actions now include a `messageId` field for SMTP or Postmark emails.
- Future SaaS email actions will use a `mailstrikeTraceId` field.
- Test actions now log a `Successfully invoked test action` message.
Storage
Aligned the check completed during S3 archiving configuration validation with actual archiving upload behavior, enabling support for buckets using Amazon S3 Object Lock.
Configuration
Migrated to official Apache Pekko releases from an internal fork.

Fixed Google Cloud Storage authentication scope placement to ensure proper handling of read/write permissions.
Added validation checks for the configuration variable `NODE_ROLES` to ensure that it is set only to allowed values (`all`, `httponly`, and `ingestonly`). Invalid node role configurations now prevent LogScale from starting and notify users with an exception error message.

For more information, see NODE_ROLES.
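The validation amounts to a membership check against the three allowed values; a sketch (illustrative only, not LogScale source):

```python
# Sketch of the NODE_ROLES validation: only the three documented
# values are accepted; anything else fails fast with an error.
ALLOWED_NODE_ROLES = {"all", "httponly", "ingestonly"}

def validate_node_roles(value):
    if value not in ALLOWED_NODE_ROLES:
        raise ValueError(
            f"Invalid NODE_ROLES value {value!r}; "
            f"allowed values: {sorted(ALLOWED_NODE_ROLES)}"
        )
    return value

assert validate_node_roles("ingestonly") == "ingestonly"
```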
Ingestion
Improved LogScale's Parser Generator dialog to better handle sample log files:

- Added clear error messages for log lines exceeding character limits
- Fixed processing of mixed-size log lines to ensure all valid lines are included
Log Collector
Implemented disk-based caching for Log Collector artifacts (installers, binaries, scripts) to reduce update server load. The cache automatically manages artifact cleanup based on manifest presence and configurable disk quota limits.
Queries
Enhanced query performance by implementing hash filter file caching for frequently accessed bucketed segments, even when queries only require hash filter files for search operations.
Improved caching of query states to allow partial reuse of query results when querying by event time, improving query performance while reducing query costs.
Functions
Using the `readFile()` function with the `include` argument will now output the columns in the order that the values were provided in the `include` array.
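The new ordering behavior can be illustrated in plain Python (the data and names are made up; only the ordering rule reflects the change above):

```python
# Columns follow the order of the include list, not the row's own key
# order. Sample data is illustrative, not a LogScale lookup file.
row = {"city": "Oslo", "country": "NO", "population": 700000}
include = ["population", "city"]

ordered = {key: row[key] for key in include}
assert list(ordered) == ["population", "city"]
```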