Falcon LogScale 1.228.1 LTS (2026-03-10)
| Version? | Type? | Release Date? | Availability? | End of Support | Security Updates | Upgrades From? | Downgrades To? | Config. Changes? |
|---|---|---|---|---|---|---|---|---|
| 1.228.1 | LTS | 2026-03-10 | Cloud, On-Prem | 2027-03-31 | Yes | 1.150.0 | 1.177.0 | No |
Download
Use `docker pull humio/humio-core:1.228.1` to download this version.
These notes include entries from the following previous releases: 1.228.0, 1.227.0, 1.226.0, 1.225.0, 1.224.0, 1.223.0, 1.222.0, 1.221.0, 1.220.0
Bug fixes and updates.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Storage
Removed the `WriteNewSegmentFileFormat` feature flag, making the new segment file format mandatory. This feature was introduced in version 1.138 to improve segment file compression and became enabled by default in version 1.162.
Important: After deploying this version, clusters cannot be downgraded to versions older than 1.177.
GraphQL API
Improved resource management controls to ensure system stability and performance for GraphQL query processing. These changes will not impact normal usage of LogScale's UI and API.
Configuration
The `MAX_GRAPHQL_QUERY_DEPTH` environment variable has been removed. Use the `GraphQLQueryDepthLimit` dynamic configuration variable instead. For information about setting dynamic configurations, see Setting a Dynamic Configuration Value. A list of available GraphQL dynamic configurations can be found at Dynamic Configuration Parameters when filtering by "GraphQL".
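As an illustration, a dynamic configuration value can be set through the GraphQL API. The sketch below only builds the request body; the `setDynamicConfig` mutation name and the `DynamicConfig` enum type are stated as the author understands LogScale's schema and should be verified against your cluster (for example via introspection):

```python
import json

def set_dynamic_config_payload(config: str, value: str) -> str:
    """Build a GraphQL request body for setting a dynamic configuration.

    The mutation and type names are assumptions to verify against the
    cluster's actual GraphQL schema.
    """
    mutation = (
        "mutation SetConfig($config: DynamicConfig!, $value: String!) {"
        " setDynamicConfig(input: {config: $config, value: $value})"
        " }"
    )
    return json.dumps({
        "query": mutation,
        "variables": {"config": config, "value": value},
    })

# Replacement for the removed MAX_GRAPHQL_QUERY_DEPTH environment variable:
payload = set_dynamic_config_payload("GraphQLQueryDepthLimit", "15")
```

The payload would then be POSTed to the cluster's `/graphql` endpoint with a Bearer token in the `Authorization` header.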
Advance Warning
The following items are due to change in a future release.
Security
Starting from LogScale version 1.237, support for insecure `ldap` connections will be removed. Self-Hosted customers using LDAP will only be able to use secure `ldaps` connections.
User Interface
From version 1.225.0, LogScale will enforce a new limit of 10 labels that can be added or removed in bulk for assets such as dashboards, actions, alerts and scheduled searches.
Labels will also have a character limit of 60.
Existing assets that exceed these new limits will continue to work until they are updated; at that point, users must remove or shorten labels to meet the requirements.
Queries
Due to various upcoming changes to LogScale and the recently introduced regex engine, the following regex features will be removed in version 1.225:
Octal notation
Quantification of unquantifiable constructs
Octal notation is being removed because it complicates parsing logic and makes typographical errors easy to overlook.
Here is an example of a common octal notation issue:
regex/10\.26.\122\.128/

In this example, `\122` is interpreted as the octal escape for `R` rather than the intended literal `122`. Similarly, the unescaped `.` matches not just the punctuation itself but any single character except newlines.

Any construct `\x`, where `x` is a number from 1 to 9, is always interpreted as a backreference to a capture group. If the corresponding capture group does not exist, it is an error.

Quantification of unquantifiable constructs is being removed due to a lack of appropriate semantic logic, leading to redundancy and errors.
Unquantifiable constructs being removed include:
- `^` (start of string/start of line)
- `$` (end of string/end of line)
- `?=` (positive lookahead)
- `?!` (negative lookahead)
- `?<=` (positive lookbehind)
- `?<!` (negative lookbehind)
- `\b` (word boundary)
- `\B` (non-word boundary)

For example, the end-of-text construct `$*` only has meaning for a limited number of occurrences: there can never be more than one occurrence of the end of the text at any given position, making a quantifier on `$` redundant.

A common pitfall that causes this warning is copying a glob pattern such as `*abc*` in as a regex while delimiting it with start-of-text and end-of-text anchors:

regex/^*abc*$/

The proper query should look like this:

regex/abc/

For more information, see LogScale Regular Expression Engine V2.
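Both pitfalls above can be reproduced in Python's `re` module, which shares the relevant escape and search semantics. This is a sketch for illustration only; LogScale's engine is not Python's:

```python
import re

# Octal pitfall: \122 is the octal escape for "R" (0o122 == ord("R")),
# not the literal digits "122", and the unescaped "." matches any single
# character except a newline.
pattern = re.compile(r"10\.26.\122\.128")
assert pattern.fullmatch("10.26.122.128") is None    # intended match fails
assert pattern.fullmatch("10.26xR.128") is not None  # unintended match

# \1 to \9 are always backreferences; referring to a missing capture
# group is a compile-time error.
raised = False
try:
    re.compile(r"(a)\2")
except re.error:
    raised = True
assert raised

# Glob pitfall: *abc* as a glob means "abc anywhere", which in regex is
# just an unanchored search for the literal abc -- no ^* or *$ needed.
assert re.search(r"abc", "2026-03-10 host1 abc event") is not None
```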
Removed
Items that have been removed as of this release.
GraphQL API
The following fields for the GraphQL mutation ViewInteractionEntry have been removed:
id
interaction
packageId
package
view
As an alternative, users can utilize the GraphQL datatype viewInteraction instead, as this provides access to view interaction data via a stable API surface.
Configuration
Removed the `NoCurrentsForBucketSegments` feature flag. Its functionality is now permanently enabled.

The environment variable `TEMP_SHORTCUT_EXTERNAL_FUNCTION_CALLS` is no longer used by LogScale and can be safely removed.
Deprecation
Items that have been deprecated and may be removed in a future release.
In order to simplify and clean up older documentation and manuals that refer to past versions of LogScale and related products, the following manual versions will be archived after 15th December 2025:
This archiving will improve site performance and navigability.
Archived manuals will be available in a download-only format in an archive area of the documentation. Archived manuals will no longer be included in search results or viewable online through the documentation portal.
The following GraphQL APIs are deprecated and will be removed in version 1.225 or later:
In the updateSettings mutation, these input arguments are deprecated:
isPackageDocsMessageDismissed
isDarkModeMessageDismissed
isResizableQueryFieldMessageDismissed
On the UserSettings type, these fields are deprecated:
isPackageDocsMessageDismissed
isDarkModeMessageDismissed
Note
The deprecated input arguments will have no effect, and the deprecated fields will always return true until their removal.
The userId parameter for the updateDashboardToken GraphQL mutation has been deprecated and will be removed in version 1.273.
`rdns()` has been deprecated and will be removed in version 1.249. Use `reverseDns()` as an alternative function.

The Secondary Storage feature is now deprecated and will be removed in LogScale 1.231.0.
The Bucket Storage feature provides superior functionality for storing rarely queried data in cheaper storage while keeping frequently queried data in hot storage (fast but expensive). For more information, see Bucket Storage.
Please contact LogScale support with any concerns about this deprecation.
Behavior Changes
Scripts or environments that make use of these features should be checked and updated for the new configuration:
Storage
Revised bucket transfer priority to the following, in descending order:
Segment uploads transferred to bucket storage for replication
Lookup file uploads transferred to bucket storage for replication
Downloads of minisegments for queries
Downloads of other segments for queries
Segment uploads for disaster recovery migration
Segment downloads for background operations
Configuration
The environment variable
VALIDATE_BLOCK_CRCS_BEFORE_UPLOADhas been removed to guarantee segment validation before uploading segment files to bucket storage. Previously, this environment variable was set totrueby default, allowing users to disable this functionality by disabling checking block CRCs prior to upload.Queries
The `QuerySessions` class now propagates user permission changes to running static queries, allowing them to end or restart as necessary. Previously, this behavior applied only to live queries.
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
Upgraded LogScale's bundled Java Development Kit (JDK) to version 25.0.2, resolving the Transparent Huge Pages (THP) issue mentioned in release 1.213.0 (see RN Issue), where systems with THP mode set to `madvise` did not enable huge pages when running with the default garbage collector.
New features and improvements
Security
Added the dynamic configuration parameter `DisableAssetSharing` to control whether users can share assets such as dashboards, saved searches, and reports with other users via direct permission assignments. When set to `true`, only users with the `changeUserAccess` permission can assign direct asset permissions.
Asset sharing is enabled by default. Administrators can disable it cluster-wide by setting the `DisableAssetSharing` dynamic configuration via the GraphQL API.
User Interface
The Search web interface has a new layout design. The update includes:
- Visualization selection of widget types, now presented as a display tab.
- Smart tab grouping with dropdown selectors for multiple Source Events and Table tabs.
- Events display options toolbar repositioned to the top of the Results panel.
- Enhanced field statistics with improved performance.
- Overall improved layout and user experience.

No action is required: users will automatically see the new design when searching.
Automation and Triggers
It is now possible to configure filter and aggregate alerts to throttle on multiple fields.
To support this change, the following GraphQL changes have been made:
The GraphQL argument throttleField has been deprecated and replaced with throttleFields for the types FilterAlert, AggregateAlert, UnsavedFilterAlert, and UnsavedAggregateAlert.
The GraphQL argument throttleField has been deprecated and replaced with throttleFields in the mutations createFilterAlert() and createAggregateAlert().
Mutations updateFilterAlert() and updateAggregateAlert() have been deprecated and replaced with updateFilterAlertV2() and updateAggregateAlertV2().
The main difference is that the throttleField field has been removed and a throttleFields field has been added.
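As a hedged sketch of multi-field throttling, a request body for createFilterAlert() might be built as follows. Only throttleFields comes from this release note; the other input field names and the input type name are illustrative assumptions to check against the actual GraphQL schema:

```python
import json

def create_filter_alert_payload(view_name: str, name: str,
                                query_string: str,
                                throttle_fields: list) -> str:
    # Input field names other than throttleFields are assumptions;
    # verify against the createFilterAlert schema on your cluster.
    mutation = (
        "mutation CreateAlert($input: CreateFilterAlert!) {"
        " createFilterAlert(input: $input) { id }"
        " }"
    )
    alert_input = {
        "viewName": view_name,
        "name": name,
        "queryString": query_string,
        # Throttle on several fields at once, e.g. per host AND per user:
        "throttleFields": throttle_fields,
        "throttleTimeSeconds": 300,
        "actionIdsOrNames": [],
        "enabled": True,
    }
    return json.dumps({"query": mutation, "variables": {"input": alert_input}})

payload = create_filter_alert_payload(
    "security-logs", "failed-logins", "status=failed", ["host", "user"])
```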
Added a new action type for uploading the result of a trigger to an AWS S3 bucket.
For more information, see Action Type: S3.
GraphQL API
Added the option for end timestamp functionality for per-repository archiving configuration. This filters out segments with start timestamps later than the configured end timestamp.
A new optional parameter `endAtDateTime` has been added to the following GraphQL endpoints:

Extended the analyzeQuery() GraphQL endpoint to support alerts. The `queryKind` parameter now supports the following values:
- For filter alerts: { filterAlert: {} }
- For aggregate alerts: { aggregateAlert: {} }
- For legacy alerts: { legacyAlert: {} }
Note
Alerts have restrictions beyond the query string, in particular regarding the time interval of a query. Those restrictions are outside the scope of the validation done by analyzeQuery().
Added ability to search for triggers by name using the GraphQL API. The new name argument can be used with filterAlert, aggregateAlert, and scheduledSearch fields in SearchDomain, Repository, or View types.
Note
name and id arguments cannot be used simultaneously.
API
Added `tableType` to the filesUsed field in query results from the QueryJobs API, indicating the type and origin of the table being referenced.
Configuration
Introduced new environment variables to configure the Netty HTTP client, specifically for bucket operations.
When `S3_NETTY_CLIENT` is set to `true`, the following environment variables are available:
- `S3_NETTY_READ_TIMEOUT_SECONDS` - the amount of time to wait for a read on a socket before an exception is thrown. The default value is 120 seconds.
- `S3_NETTY_WRITE_TIMEOUT_SECONDS` - the amount of time to wait for a write on a socket before an exception is thrown. The default value is 30 seconds.
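A small sketch of how the documented defaults apply when the variables are unset (illustrative only; LogScale itself resolves these on the JVM side):

```python
def netty_timeouts(env: dict) -> tuple:
    """Resolve the documented Netty client timeouts with their defaults."""
    read = int(env.get("S3_NETTY_READ_TIMEOUT_SECONDS", "120"))
    write = int(env.get("S3_NETTY_WRITE_TIMEOUT_SECONDS", "30"))
    return read, write

# Unset variables fall back to the documented defaults (120 s / 30 s).
assert netty_timeouts({}) == (120, 30)
# An explicit override applies only to the variable that was set.
assert netty_timeouts({"S3_NETTY_READ_TIMEOUT_SECONDS": "240"}) == (240, 30)
```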
Dashboards and Widgets
Enhanced Schedule PDF Reports behavior:
If a report times out more times than the value set in `SCHEDULED_REPORT_MAX_RETRY_ATTEMPTS` (default: 5), the report is automatically disabled.
When a report is disabled for any reason (timeouts or specific errors), an email notification is sent to the intended report recipient.
Queries
Added support for Unicode categories in LogScale Regular Expression Engine V2 using `\p{L}` syntax. Supported categories include:
- Letters (`L`)
- Symbols (`S`)
- Punctuation (`P`)
- Control characters (`Cc`)

These categories can also be used in character classes, such as `[\p{S}A-Z]`, and negated using `\P{L}`. For more information, see Regular Expression Engine V2 Syntax Patterns.
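The category letters map to the standard Unicode general categories. Python's `unicodedata` module (used here purely to illustrate the category model, not LogScale's engine) shows which category a character falls in:

```python
import unicodedata

# General-category codes: letters begin with "L", symbols with "S",
# punctuation with "P"; "Cc" is the control-character category.
assert unicodedata.category("A") == "Lu"   # uppercase letter, matched by \p{L}
assert unicodedata.category("$") == "Sc"   # currency symbol,  matched by \p{S}
assert unicodedata.category(",") == "Po"   # punctuation,      matched by \p{P}
assert unicodedata.category("\n") == "Cc"  # control char,     matched by \p{Cc}
```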
Added support for `(?P<X>)` syntax for named capturing groups in the LogScale Regular Expression Engine V2. This syntax is functionally equivalent to the existing `(?<X>)` syntax.
Metrics and Monitoring
Added new CPU measurements to the `stat_cpu` nonsensitive logger:
- steal
- guest
- guestNice

These fields are available in the humio repository.
Functions
Released the new query function `explain:asTable()`, which provides detailed insights into query performance by showing a step-by-step breakdown of time consumption and event filtering throughout the query.
Fixed in this release
Security
Users who have `ManageOrganizations` (Cloud) or `ManageCluster` (Self-Hosted) permissions can now change the Data Retention settings above the repository time limit via the web interface. Previously, changing these settings was possible only via GraphQL; this inconsistency has now been fixed.

Fixed an issue with JSON Web Token (JWT) authentication where simultaneous user-creation requests across different nodes would fail with the error message User already exists. Now, when authenticating with LogScale using a JWT, if the username specified in the token's user claim does not exist, the user is created automatically; the process is also self-correcting to avoid similar errors in the future.
System and organization API tokens could not be used for certain view-related routes, even when the tokens contained the necessary permissions. This issue has now been fixed.
User Interface
Fixed an issue with the parser duplication dialog in the UI that incorrectly displayed a repository selector. When duplicating a parser, users can now only duplicate within the same repository, matching the API's actual behavior.
Note
The repository selector continues to work as expected for other asset types like saved queries, dashboards, and actions.
Fixed an issue with correlate query graph visualization, where nodes and edges would not render correctly in certain circumstances.
Two incorrect behaviors have been fixed in the web interface:
The Events tab would not show when the main correlate query did not return results.
The Widget selector chose an incorrect default widget.
Automation and Triggers
Fixed an issue where creating a scheduled report without parameter values would produce an invalid report that failed to generate.
Fixed an issue where parameters set by the user during the creation of Schedule PDF Reports were sometimes not saved. To minimize disruption to the user, reports that used default dashboard values for parameters do not require any change; they will continue to generate using default values.
Fixed an issue with scheduled searches where schedule changes would only be applied to runs after "now". To achieve this, the GraphQL datatype ScheduledSearch has undergone the following changes:
GraphQL fields lastExecuted and lastTriggered have been deprecated.
GraphQL fields timeOfLastExecution and timeOfLastTrigger have been added.
The new fields contain the actual execution time of the query. The deprecated fields contained the end time of the search interval of the last query that was executed or triggered.
Note
The new fields will only have a different value for scheduled searches running on @timestamp where the parameter `searchIntervalOffsetSeconds` is set to a value greater than 0. For more information, see ScheduledSearch.
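The relationship between the new and deprecated fields can be illustrated with simple timestamp arithmetic. This is a sketch of the description above, not an API call; the example timestamp and offset are made up:

```python
from datetime import datetime, timedelta, timezone

# With searchIntervalOffsetSeconds = 300, the search interval ends
# 300 seconds before the query actually runs.
offset_seconds = 300
time_of_last_execution = datetime(2026, 3, 10, 12, 0, tzinfo=timezone.utc)

# The deprecated fields held the end of the search interval:
last_executed = time_of_last_execution - timedelta(seconds=offset_seconds)

# The two families of fields diverge only when the offset is > 0.
assert time_of_last_execution - last_executed == timedelta(seconds=300)
```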
Fixed a rare issue where a trigger deletion could be incorrectly logged as a broken trigger.
Storage
An error log stating Unexpected normal segment in segments missed by coordinator was displayed when a view was being restored from deletion. This issue has now been fixed.
Fixed an issue where global snapshot failure would prevent further attempts until system restart.
Events containing the ASCII `NUL` character in field values could be stored in a corrupted format, and blocks containing such events may have been corrupted as well; as a consequence, such fields may have contained incorrect values when displayed or filtered. This issue has now been fixed.

Fixed an issue occurring during offset calculation for digest that could cause minisegments that go missing before being fully replicated to be incorrectly deleted and replayed from Kafka.
This occurred only in datasources that were recently created or whose status had recently changed from idle to non-idle. In the future, these minisegments will appear in the cluster admin panel designated as "absent".
Fixed an issue where a failing assertion in `DataSyncJob` could cause a system crash in very rare cases.

Fixed an issue where disk clean-up would leak aux/hash files on disk when only the aux/hash files were present and not the segment files themselves. This only affects systems where the `KeepSegmentHashFiles` feature flag has been enabled.

Fixed an issue with task cancellation in the node-to-node segment fetcher that could cause a terminating node to drop a copy of the segment file it was fetching.
Fixed an issue where nodes could enter a repeated download and deletion loop of the same segment due to over-replication.
API
An issue has been fixed in how nextRunInterval is applied to subqueries: when `cacheHint` is supplied for a query, it is now correctly propagated to subqueries (for example, in queries using the `defineTable()` function).
Configuration
Fixed an issue where LogScale would reuse existing Kafka bootstrap servers when tracking brokers, even when Kafka clients were not allowed to rebootstrap. This could prevent Kafka clients from reaching the correct Kafka cluster. For reference, rebootstrapping solves a common issue that occurs when the connection is lost to all Kafka brokers known to the user based on the most recent metadata request.
For example, if a user has "Kafka Broker 1" and "Kafka Broker 2" running and attempts to turn on "Kafka Broker 3" and "Kafka Broker 4" while turning off "Kafka Broker 1" and "Kafka Broker 2" at the same time, a non-rebootstrapping user would lose connection to Kafka because only "Kafka Broker 1" and "Kafka Broker 2" are known to it.
With rebootstrapping enabled, users are able to retry all initial bootstrap servers. If any server is live, the client will not lose connection.
Kafka clients in LogScale can be prevented from rebootstrapping by setting the environment variable `KAFKA_COMMON_METADATA_RECOVERY_STRATEGY` to `none`. Disabling rebootstrapping is generally not recommended. However, it may be necessary if any bootstrap servers specified in `KAFKA_SERVERS` could resolve to a Kafka broker in a cluster other than the original cluster.
For more information, see the Apache documentation: KIP-899: Allow producer and consumer clients to rebootstrap
Ingestion
Updated parser/v0.3.0 schema to allow empty rawString values in test cases, ensuring consistency between API-created parsers and YAML export functionality. Previously, parser templates created via CRUD APIs with empty rawString values would fail YAML export due to schema validation.
Fixed an issue where Amazon Simple Queue Service (SQS) permissions problems were not appearing in the activity log for ingest feeds.
Queries
Fixed an issue where using the `like` operator in a query would sometimes cause the query to malfunction and return no results in the Event list.

Fixed an issue where an error surfacing during subquery result calculation, such as within `join()` or `defineTable()`, would not be visible to the user.

Fixed an issue where query results could be incorrectly reused from the cache for static queries. Only queries using @ingesttimestamp in conjunction with the `start()` and/or `end()` functions were affected.
Functions
Fixed an issue in the `match()` function where characters with larger lowercase than uppercase UTF-8 representations caused lookup failures.

Fixed an issue where prefix values of a certain length could cause an error during the creation of the lookup structure for the `match()` function.

Fixed an issue where using the function `wildcard()` as part of an expression (for example, `test(wildcard(...))`) would result in an internal server error. The proper query validation error now correctly displays in the query editor.
Known Issues
Storage
For clusters using secondary storage where the primary storage on some nodes may be getting full (that is, storage usage on the primary disk is halfway between `PRIMARY_STORAGE_PERCENTAGE` and `PRIMARY_STORAGE_MAX_FILL_PERCENTAGE`), those nodes may fail to transfer segments from other nodes. The failure is indicated by the error java.nio.file.AtomicMoveNotSupportedException with the message "Invalid cross-device link".
This does not corrupt data or cause data loss, but it will prevent the cluster from being fully healthy and could also prevent data from reaching adequate replication.
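To make the trigger condition concrete, here is one reading of "halfway between" with illustrative values of 80% and 95% for the two settings; your cluster's actual configuration may differ:

```python
# Illustrative values only; read the real ones from your cluster's
# PRIMARY_STORAGE_PERCENTAGE and PRIMARY_STORAGE_MAX_FILL_PERCENTAGE.
primary_storage_percentage = 80.0
primary_storage_max_fill_percentage = 95.0

# Midpoint between the two thresholds: with these values, nodes whose
# primary disk usage sits around 87.5% may hit the failure described above.
midpoint = (primary_storage_percentage
            + primary_storage_max_fill_percentage) / 2
assert midpoint == 87.5
```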
Improvement
Installation and Deployment
Improved Indicator of Compromise (IoC) service efficiency by preventing unnecessary full downloads from the remote IoC server or CrowdStrike API when data is already present in the cluster.
Administration and Management
For release 1.222.0, several minor internal changes were made to processes that do not affect the user experience.
Falcon Data Replicator
The Falcon Data Replicator metrics job now uses an HTTP proxy when `FDR_USE_PROXY` is enabled.
User Interface
Restored quick-access query links from the Parsers overview. Users can now use context menu actions to navigate directly to the Search page to query parser events and errors:
- Quickly view all events parsed by a specific parser
- Instantly see parsing errors for troubleshooting
For more information, see Manage Parsers.
Automation and Triggers
The Triggers overview page has been updated with the following improvements:
- Sorting is now available for all columns.
- The Search... field now supports filtering across all columns.
- New filtering options make it possible to quickly select all items and then exclude individual ones, and to quickly identify triggers with no label, action, or package attached. Both options are available for the Labels, Actions, and Packages columns.
For more information, see Manage triggers.
Enhanced action logging in humio-activity logs:
- Successfully triggered actions are now logged in the humio-activity repository with the message Invoking action succeeded.
- Email actions now include a messageId field for SMTP or Postmark emails.
- Future SaaS email actions will use a mailstrikeTraceId field.
- Test actions now log a Successfully invoked test action message.
Storage
Aligned the check completed during S3 archiving configuration validation with actual archiving upload behavior, enabling support for buckets using Amazon S3 Object Lock.
Added a delay between retry attempts when global snapshot uploads fail.
Configuration
Migrated from an internal fork to official Apache Pekko releases.

Fixed Google Cloud Storage authentication scope placement to ensure proper handling of read/write permissions.
Added validation checks for the configuration variable `NODE_ROLES` to ensure it is set only to allowed values (`all`, `httponly`, and `ingestonly`). Invalid node role configurations now prevent LogScale from starting and notify users with an exception error message.
For more information, see `NODE_ROLES`.
Ingestion
Improved LogScale's Parser Generator dialog to better handle sample log files:
- Added clear error messages for log lines exceeding character limits
- Fixed processing of mixed-size log lines to ensure all valid lines are included
Log Collector
Implemented disk-based caching for Log Collector artifacts (installers, binaries, scripts) to reduce update server load. The cache automatically manages artifact cleanup based on manifest presence and configurable disk quota limits.
Queries
Enhanced query performance by implementing hash filter file caching for frequently accessed bucketed segments, even when queries only require hash filter files for search operations.
Function names are no longer reserved words in CrowdStrike Query Language (CQL). As a result, adding new functions will not risk accidentally rendering existing queries invalid. Going forward, a word is only interpreted as a function call if it is immediately followed by a starting parenthesis.
For example, the word "test" was previously reserved and had to be quoted because it is also the name of a function (test()); it can now be written without quotes.
For more information, see Appendix D - Reserved Words.
Optimized performance in Regular Expression Engine V2 for zero-or-more repetitions of single-character matches at the start of a regex. For example, regexes such as `/.*foo/` now complete more quickly, including compared to the previous engine.

The election process for slow queries has been updated with the following parameters:
Changed the threshold from 100 times slower to 500 times slower for vote casting.
Increased vote timeout from 5 minutes to 15 minutes.
When a node is elected as problematic by the entire cluster within the timeout period, it is logged with the message These nodes were deemed bad by the rest of the cluster.
Improved query throttling for segment merges. Queries are not throttled if segment merging falls behind due to slow segment fetches.
Improved caching of query states to allow partial reuse of query results when querying by event time, improving query performance while reducing query costs.
Fleet Management
Fleet Management now performs a staged rollout of collector version updates within groups to prevent simultaneous updates of all collectors.
Auditing and Monitoring
Added logging for topic-level configurations to `KafkaStatusLoggerJob`.
Functions
Using the `readFile()` function with the `include` argument now outputs the columns in the order that the values were provided in the `include` array.
Other
The "The http server closed the connection unexpectedly" message now appears at the informational level instead of the error level, as this is expected behavior when requests fail to complete quickly during shutdown.