Latest LTS Release

Falcon LogScale 1.189.1 LTS (2025-06-11)

Version: 1.189.1
Type: LTS
Release Date: 2025-06-11
Availability: Cloud, On-Prem
End of Support: 2026-06-30
Security Updates: Yes
Upgrades From: 1.150.0
Downgrades To: 1.177.0
Config. Changes: No

These notes include entries from the following previous releases: 1.189.0, 1.188.0, 1.187.0, 1.186.0, 1.185.0, 1.184.0

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Packages

    • Previously, LogScale allowed dashboard YAML template files to omit the $schema field; this field is now required. The $schema field is what LogScale uses to determine how to read the template file, so it is important that it is correct. Before this change, if the $schema field was missing from a dashboard template, LogScale assumed the file was a dashboard template using dashboard schema version 0.1.0, which was released in 2020. Because that old schema version does not recognize any features released since then, using it as the default could cause confusing error messages when the $schema field was omitted. The field is therefore now required. If LogScale rejects a dashboard YAML template file due to this change, add the following line to the file to restore the previous behavior: "$schema": "https://schemas.humio.com/dashboard/v0.1.0".

Advance Warning

The following items are due to change in a future release.

  • Functions

    • Starting from release 1.195, the query functions asn() and ipLocation() will display an error instead of a warning if an error occurs with their external dependency. This change aligns their behavior with that of functions using similar external resources, such as match(), iocLookup(), and cidr().
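
      As an illustration, a minimal sketch of a query that depends on that external dependency; the field name client_ip is an assumption, not part of this note:

      logscale Syntax
      ipLocation(field=client_ip)
      | asn(field=client_ip)

      From release 1.195, a failure of the external dependency for either function surfaces as an error on such a query rather than a warning.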

Removed

Items that have been removed as of this release.

Installation and Deployment

  • Support for the HUMIO_DEBUG, JAVA_DEBUG_PORT, DEBUG_SUSPEND_FLAG and JAVA_DEBUG_OPTS environment variables in the LogScale Launcher Script has been removed. If the LogScale process needs to be started in debug mode, set the relevant flags in the HUMIO_OPTS environment variable instead.

Administration and Management

  • Humio-Usage v0.2.0 dashboard data has been removed and replaced with a note and link to the Usage Page.

  • The following metrics have been removed:

    • segments-assigned-to-host-as-owner

    • segment-bytes-assigned-to-host-as-owner

    These metrics provided incomplete data, tracking only post-merge segment assignments while excluding rebalancing-related segment movements.

GraphQL API

  • The following deprecated GraphQL fields have now been removed on the Parser output datatype:

    • assetType

    • sourceCode

    • tagFields

    • testData

  • The deprecated testParser GraphQL mutation has now been removed.

    Note that a number of parser CRUD APIs were deprecated alongside testParser back in release 1.120, and these APIs will also be removed soon. Consider this as a reminder to move to the newer APIs if you have not already done so.

  • The deprecated storage task of the GraphQL NodeTaskEnum has been removed (deprecated since v1.173.0). For more information, see ???.

    This removal affects hosts configured with node role all:

    • Dynamic configuration to disable segment storage and search is no longer supported

    • Use the existing node eviction mechanism instead for this functionality

  • The getFilterAlertConfig GraphQL field has been removed from the HumioMetadata datatype.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The Humio-Usage package has been deprecated and scheduled for removal in version 1.189 LTS.

  • The color field on the Role type has been marked as deprecated (will be removed in version 1.195).

  • The setConsideredAliveUntil and setConsideredAliveFor GraphQL mutations are deprecated and will be removed in 1.195.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

  • The EXTRA_KAFKA_CONFIGS_FILE configuration variable has been deprecated and is planned for removal no earlier than version 1.225.0. For more information, see RN Issue.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • When uploading to Bucket Storage, LogScale now always uses the first ownerHost to perform the upload. This is a preparatory change to allow later optimization.

    • The S3 SDK retry logic has been broadened:

      • LogScale will now do retries for bucket storage operations on a much broader range of exceptions (SDKException).

      • Segment uploads that fail after the SDK call will no longer be retried immediately, but will still be re-queued.

      • Uploads of global snapshots and uploaded files will still be retried implicitly, and the retry log lines now specify which type of upload is initiating it.

  • Configuration

    • Multi-cluster searches now have a warning attached when submission has failed for 10 minutes or more, but LogScale continues to attempt submissions on the failing connection instead of stopping it.

      As a consequence, the environment variable FEDERATED_SUBMISSION_TIMEOUT_MILLIS is no longer used.

  • Ingestion

    • When a test case is deleted from a parser and a new test case is added without re-running the tests, the new test case no longer shows the test results of the previously removed test case.

  • Queries

    • The usage of the noResultUntilDone query flag has been corrected. The flag was incorrectly unset, which meant that needless computation was performed, for example in scheduled searches or in subqueries defined by defineTable(). Additionally, partial results were returned to clients, which is not the intended behavior when noResultUntilDone is used.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The bundled JDK has been upgraded to version 24.0.1.

New features and improvements

  • Security

    • Asset sharing is now available for dashboards, triggers, actions, saved queries, scheduled PDF reports, and files. This means that:

      • It is now possible to grant permissions to users and groups for these assets at the individual asset level, so that others may collaborate on tasks involving these assets even though they don't have permission to edit or delete all of that type of asset in the view.

      • Any user who has permissions to the asset can grant up to the same permissions as they have to another user who has read permissions in the view.

      • Users who have the Change user access permission or Manage users permission can add users or groups who did not previously have access to assets in the view to a particular asset and grant them permissions.

      For more information about the general concept of asset permissions, see Asset permissions.

      For information about granting permissions for each of the supported asset types, see the documentation for the relevant asset type.

    • Users can now successfully add roles to users or groups on the repository permissions page when they have the Change user access permission. Previously, these users would encounter an error message stating roles could not be loaded.

    • The view level permission Query model for persistent queries has been renamed to Query ownership for persistent queries.

  • Installation and Deployment

    • LogScale is now available in an image based on Alpine Linux ARM. The image is tagged as humio/humio-core:1.189.1--arm64.

    • The HUMIO_NON_HEAP_SIZE_MB launcher variable now accounts for off-heap memory. For example: with 1 CPU core (resulting in a reservation of 250MB for off-heap memory), 4GB of RAM, and HUMIO_NON_HEAP_SIZE_MB=500, the launcher now reserves 3.25GB for the heap and 250MB for off-heap, leaving 500MB free. Previously, LogScale would reserve 3.5GB for the heap and 250MB for off-heap, leaving 250MB free.

  • Administration and Management

    • A new internal metric data-ingester-parser-errors is now available in the humio-metrics repository to provide error tracking at the parser level. It is similar to the existing data-ingester-errors metric, but tracks errors per parser per repository (rather than only per repository).

  • User Interface

    • Added a failureOrigin field to all logs in the humio-activity repository for filter and aggregate alerts as well as scheduled searches, where status=Failure. The value of the new field can be either System or User, and indicates a best guess as to whether the failure is due to a system error or a user error, such as errors in the query.
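
      For example, a sketch of a query run in the humio-activity repository to break down recent failures by origin; the grouping is illustrative, while the field names come from this note:

      logscale Syntax
      status=Failure
      | groupBy(failureOrigin)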

    • Improved automatic indentation insertion in the Query Editor in bracket contexts. For example:

      logscale Syntax
      groupBy( x, function=[ ] )

      will now auto-indent on newline insertions to:

      logscale Syntax
      groupBy( x, function=[
            ] )

    • The Query model label has been renamed to Query ownership. This change applies to the current query model UI sections in triggers, packages and shared dashboards.

    • The Y-axis in the Time Chart widget now adds a space before the suffix for all formats except Metric in the Format Value property.

  • Automation and Alerts

    • The Trigger properties panel has some layout changes:

      • Section General renamed to General properties

      • Section Query renamed to Configuration

      • Section Actions moved above the Advanced settings section and is now only visible when the trigger type is selected

      • Throttling moved to Configuration section

      • Trigger panel title changed

  • Storage

    • LogScale now supports Azure bucket storage with account key-based authentication.

      For more information, see Azure Bucket Storage.

  • GraphQL API

    • A new segment() GraphQL query is available. It provides access to information about a single segment specified by its identifier. This query is not a quick lookup and should be used only for troubleshooting or to help with data recovery. It requires the ManageCluster permission.

  • Configuration

    • The default value for the AUTOSHARDING_MAX configuration variable is now 128k (was 1k).

    • Enabling idempotence for the Kafka producer:

      • Set enable.idempotence=true for the global producer. This can't be overridden and is required to avoid the risk of message reordering in Kafka.

      • Set enable.idempotence=true for the ingest queue producer. This can be overridden using the KAFKA_INGEST_QUEUE_PRODUCER_ configuration prefix by adding the ENABLE_IDEMPOTENCE suffix as the Kafka producer configuration option.

      While the above configuration is not required for LogScale to work, it is advisable in order to prevent reordering of messages and to reduce the frequency of duplicates in the ingest queue.

    • Introduced a new environment variable QUERY_COORDINATOR_EXECUTOR_CORES that determines the size of the thread pool used by the query coordinator for heavy query related operations, such as merging results from workers. This makes query coordination more resilient when running queries with large and expensive states.

  • Log Collector

    • Introducing labels. Labels are key-value pairs defined in a Log Collector's local Fleet Management configuration. Label values can be dynamically set using environment variables. When Log Collectors connect to LogScale/NG-SIEM, they transmit their labels to the instance managing the fleet. The labels enable:

      • Grouping collectors

      • Searching across collectors

      • Configuring collectors based on shared characteristics

      For example, a fleet management group defined as labels.service=web includes all collectors with the label name service and the label value web.

      This grouping allows administrators to create and apply reusable configurations specifically tailored to collectors sharing the same service type, streamlining fleet management and maintenance.

      For more information, see Fleet Management (fleetManagement).

    • The Custom Install Legacy Fleet Management configuration snippet has been replaced with the supported enrollment mode localConfig.

  • Queries

    • Added LogScale Multi-Cluster Search query handover support:

      • Enables automatic reconnection and continued polling of downstream remote clusters

      • Current limitation: local connection handovers are not supported, meaning that:

        • Progress on local connections will be lost after handover

        • Queries to local connections will be resubmitted, resulting in a potential temporary loss of progress.

  • Functions

    • Query functions using files will now report warnings for missing files or other file errors when used in parsers.

      For more information, see Errors, Validation Checks, and Warnings.
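
      A minimal sketch of a parser pipeline where this applies; the file name allowed_hosts.csv and the field name hostname are assumptions, not part of this note:

      logscale Syntax
      parseJson()
      | match(file="allowed_hosts.csv", field=hostname, strict=false)

      If allowed_hosts.csv is missing or cannot be read when this runs as part of a parser, a warning is now reported.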

    • The SortNewDatastructure feature flag is now enabled by default in Self-Hosted environments.

    • The ioc:lookup() query function now emits warnings in parsers when there are issues with the IOC service, instead of throwing an error. Errors are still thrown if the issue occurs during query execution.

      For more information, see Parser Behavior with Missing Database.

Fixed in this release

  • Installation and Deployment

    • The java.logging module has now been included in the bundled JDK. This dependency was erroneously missing, which caused NoClassDefFoundError errors.

  • Administration and Management

    • A 401 Unauthorized authentication error was issued across all views and repositories for all users during file export, despite the token being valid. The authentication process has been corrected, and file export now works as expected with valid tokens.

    • In Multi-Cluster Search environments, queries could fail to start when attempting to fetch tables. This was caused by the worker cluster incorrectly reporting that the table already existed due to local filesystem/cache of the specific node handling the request, while the table coordinator node (where tables should be fetched from) did not actually have the table. With this fix, LogScale now first checks the availability of the table on the table coordinator node rather than checking on the local node, thus ensuring queries start correctly.

  • Falcon Data Replicator

    • A configuration issue prevented proper FDR publishing to Global Database. This issue affected job scheduling and could cause incorrect node allocation for FDR ingestion (for example, ingestion scheduled on more or fewer nodes than specified).

    • Fixed an issue where the check for which nodes should run an FDR feed didn't take node capabilities into account, potentially causing fewer nodes to actually run the feed.

  • User Interface

    • The Export file as CSV option would fail or yield an empty file when one of the exported fields was a tag field. This issue has now been fixed.

    • Fixed an issue where auto-completion for field names in the Query editor would sometimes be missing.

    • Links to the documentation in the LogScale UI have been fixed to point to the correct pages instead of the library homepage.

    • Fixed an issue where clicking Scroll to load more in the top banner of the Event list would not update the view if the event list itself was paused.

  • Automation and Alerts

    • In rare cases, the information about the execution of filter and aggregate alerts could fail to be saved, potentially resulting in duplicate alerts. This issue has now been fixed.

    • After a digest reassignment, aggregate alerts could use a partial query result and report a warning about ingest delay rather than wait for the new digester to catch up. This issue has now been fixed.

    • Large query results (more than 1GB) for alerts could cause the query to crash. This issue has been fixed so that large alert result sets are now handled.

  • Storage

    • An invalid bucket/region would not show the appropriate error message when trying to configure archiving. This issue has now been fixed.

    • LogScale no longer attempts to download MaxMind files when there is insufficient disk space.

    • Fixed a feature flag rollout issue on clusters where individual users or organizations had previously been opted into the feature.

      Important

      Required Action:

      • If you previously disabled rolled-out features via API, you must reapply these opt-outs

      • This is necessary due to changes in how opt-outs are represented in Global Database.

    • Resolved an issue that could cause a "Resetting minimum offset due to truncation of the ingest queue" warning message.

    • A very rare race condition could cause global transactions to appear to have succeeded when they actually didn't. This issue has now been fixed.

    • An issue has been fixed that could cause unnecessary delays in uploading files to Bucket Storage.

  • Configuration

    • Changes to the LookupTableSyncAwaitSeconds dynamic configuration were not reflected until the next server restart. This issue has been fixed so that changes in this configuration's value are now reflected immediately.

  • Dashboards and Widgets

    • The Time Chart tooltip legend could show unsorted values on query result update. This issue has now been fixed so that the list of top scores is now sorted.

  • Queries

    • When multiple events have the same timestamp, they are sorted by ID, which could cause an unstable order as well as internal errors for a few queries, due to violated assumptions. This issue has now been fixed.

    • Fixed an issue where a query using a lookup file might fail to start since query dependencies were not propagated in time to query workers. Such a query would be stopped with a "Failed to load file or table. Try again shortly" message.

    • Fixed an issue where query routing inside the cluster relied on original authentication from the client rather than internal authentication. This could lead to a situation where a user could submit a query, but was unable to then poll it.

    • Logs with ClusterHostAliveStats in the class field could be dropped when liveness changes occurred within one second of each other. This issue has been fixed so that changes occurring less than one second apart are now included.

    • Transferring tables between cluster nodes (either defined using defineTable() or from Lookup Files) could lead to thread starvation and node crashes. This issue has now been fixed.

    • If a query hit an internal error, such as a failure to distribute tables, polling that query would result in a 404 Not Found error. This issue has been fixed so that the correct 5xx error is now propagated to the client.

    • Fixed race condition in LogScale Multi-Cluster Search. Previously, queries initiated simultaneously with a new connection addition to the multi-cluster view could exclude the new connection for the query. This synchronization issue has been resolved.

    • Fixed a race condition that could occur when states were merged in Query Coordination during the query handover process. This could result in corrupted query state or failed query handover.

    • Fixed an issue where a query might be marked as "cancelled" but not "done" when exceptions occurred during result calculation, such as RPC request failures.

  • Functions

    • The readFile() function has been fixed to correctly emit warnings that may occur while loading the requested files.
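
      For example, a sketch of a query reading a lookup file; the file name example.csv is an assumption, not part of this note:

      logscale Syntax
      readFile("example.csv")

      Any warnings produced while loading example.csv are now emitted with the query result.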

    • If invalid input containing unescaped = characters was passed to the parseCEF() function, the entire query or parser execution would fail. This issue has been fixed so that parseCEF() now recovers from the invalid input and adds an @error field to the event.
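
      A hypothetical illustration; the CEF sample below, with an unescaped = inside the msg value, is an assumption rather than an example from this note:

      logscale Syntax
      parseCEF(field=@rawstring)

      Given input such as CEF:0|Vendor|Product|1.0|100|Test|5|msg=key=value, this previously failed the entire query or parser; it now adds an @error field to the event instead.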

  • Other

    • Fixed an issue that could cause globally enabled features to appear to be disabled for individual organizations.

Improvement

  • Security

    • Improved permission validation: the Create Role button is now disabled for users who lack sufficient permissions to complete the role creation process. This prevents users from starting a workflow that would ultimately fail, saving time and reducing frustration. Previously, users could begin creating a role only to encounter an error at the final step due to insufficient permissions.

  • Falcon Data Replicator

    • The FDR logging has been improved by adding some of the SQS metadata fields to the activity log. The metadata fields that are now included in the logs are:

      • Sent timestamp

      • Approximate receive count

      • Approximate first receive timestamp

  • Storage

    • Improved the response time when there is a large number of datasources for:

      • GraphQL calls fetching the repository.datasources field

      • the api/v1/dataspaces and api/v1/repositories endpoints

    • Made a few minor adjustments to the global framework to avoid the possibility of bugs. These changes are not expected to impact current behavior.

    • Improved the memory estimates of multi-cluster searches to more accurately reflect real usage.

    • Heap memory estimation for digesters has been adjusted:

      • Reduced estimated heap memory requirement from 5MB to 1MB per datasource.

      • No impact on runtime behavior

      • Warning messages are produced via DigesterHeapSizeEstimateLogging if the estimated memory requirements are not met.

  • Queries

    • Implemented a change to how queries track segment merging, which should eliminate edge cases where queries miss data due to merges.

    • Queries that combine different text searches with different tag filters now have improved performance due to a reduced volume of scanned data. For example, this change would improve the performance of a query like:

      logscale Syntax
      #event=ConnectIP4 OR (#event=ReceiveAcceptIP4 AND RemoteAddressIP4=12.34.56.78)

  • Functions

    • The groupBy() function now displays a more descriptive error message when the maximum limit is exceeded, specifying the maximum allowed limit for your environment.
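
      For example, a sketch of a grouping whose limit argument exceeds what the environment allows; the field name source_ip and the limit value are assumptions, not part of this note:

      logscale Syntax
      groupBy(source_ip, limit=2000000)

      If 2000000 is above the maximum allowed in the environment, the error message now specifies that maximum.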

    • The parseCEF() query function has an improved output message in case of incorrect input conditions.

  • Packages

    • Improved error messages shown when installing a YAML template file (individually or through a package) where the $schema field in the file is misconfigured.