Falcon LogScale 1.177.1 LTS (2025-03-19)

Version: 1.177.1
Type: LTS
Release Date: 2025-03-19
Availability: Cloud, On-Prem
End of Support: 2026-03-31
Security Updates: Yes
Upgrades From: 1.150.0
Downgrades To: 1.171.1
Config. Changes: No

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Automation and Alerts

    • Important Notice: Downgrade Considerations

      Enhancements to Aggregate alerts in version 1.176 include additional state tracking for errors and warnings. While this is an improvement, it does require attention if you need to downgrade to an earlier version.

      Potential Impact:

      If you downgrade from 1.176 or above to 1.175 or below, you may encounter errors related to Aggregate Alerts, causing Aggregate Alerts to not run to completion.

      Resolution Steps:

      After downgrading, if you encounter errors containing "Error message and error in phase must either both be set or not set", do the following:

      1. Identify affected Aggregate Alerts by executing the following GraphQL query:

        graphql
        query q1 {
          searchDomains {
            name    
            aggregateAlerts {id, lastError, lastWarnings}
          }
        }

        Document the IDs of any affected alerts having warnings and no errors set.
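
        For reference, here is a sketch of what a response with one affected alert might look like (the view name, alert ID, and warning text are illustrative):

        json
        {
          "data": {
            "searchDomains": [
              {
                "name": "my-view",
                "aggregateAlerts": [
                  {"id": "abc123", "lastError": null, "lastWarnings": ["This query consumed a lot of memory"]}
                ]
              }
            ]
          }
        }

        In this sketch, alert abc123 has warnings but no error set, so it would need the mutation in step 2.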

      2. Apply the resolution – for each identified alert with warnings (and, optionally, errors), apply this GraphQL mutation, replacing INSERT with your actual view name and alert ID:

        graphql
        mutation m1 {
          clearErrorOnAggregateAlert(input:{viewName:"INSERT",id:"INSERT"}) {id}
        }

        Keep track of modified alert IDs for future reference.

      3. Verify the resolution – confirm that the system returns to normal operation, and monitor for any additional error messages using a LogScale query and/or alert, such as:

        logscale
        #kind=logs
        class="c.h.c.Context"
        "Error message and error in phase must either both be set or not set"

      These steps will reset the Aggregate Alerts and restore the system to normal operation.

Removed

Items that have been removed as of this release.

Configuration

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The color field on the Role type has been marked as deprecated (will be removed in version 1.195).

  • The storage task of the GraphQL NodeTaskEnum is deprecated and scheduled to be removed in version 1.185. This affects the following items:

  • LogScale is deprecating free-text searches that occur after the first aggregate function in a query. These searches likely did not and will not work as expected. Starting with version 1.189.0, this functionality will no longer be available. A free-text search after the first aggregate function refers to any text filter that is not specific to a field and appears after the query's first aggregate function. For example, this syntax is deprecated:

    logscale Syntax
    "Lorem ipsum dolor" 
    | tail(200)         
    | "sit amet, consectetur"

    Some uses of the wildcard() function, particularly those that do not specify a field argument, are also free-text searches and are therefore deprecated as well. Regex literals that are not bound to a specific field, for example /(abra|kadabra)/, are likewise free-text searches and are thus also deprecated after the first aggregate function.

    To work around this issue, you can:

    • Move the free-text search in front of the first aggregate function.

    • Search specifically in the @rawstring field.

    If you know the field that contains the value you're searching for, it's best to search that particular field. The field may have been added by either the log shipper or the parser, and the information might not appear in the @rawstring field.

    Free-text searches before the first aggregate function continue to work as expected, since they are not deprecated. Field-specific text searches also work as expected: for example, myField=/(abra|kadabra)/ continues to work after the first aggregate function.
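
    For example, the deprecated query above can be rewritten by moving the free-text search in front of the aggregate function:

    logscale
    "Lorem ipsum dolor"
    | "sit amet, consectetur"
    | tail(200)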

  • The use of the event functions eventInternals(), eventFieldCount(), and eventSize() after the first aggregate function is deprecated. For example:

    Invalid Example for Demonstration - DO NOT USE
    logscale
    eventSize() | tail(200) | eventInternals()

    Usage of these functions after the first aggregate function is deprecated because they work on the original events, which are not available after the first aggregate function.

    Using these functions after the first aggregate function will be made unavailable in version 1.189.0 and onwards.

    These functions will continue to work before the first aggregate function, for example:

    logscale
    eventSize() | tail(200)
  • The removeLimit() GraphQL mutation is being deprecated and replaced by the new mutation removeLimitWithId().

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

  • The EXTRA_KAFKA_CONFIGS_FILE configuration variable has been deprecated and is planned for removal no earlier than version 1.225.0. For more information, see RN Issue.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The JDK included in container deployments has been upgraded to 23.0.2.

New features and improvements

  • Security

  • Administration and Management

    • The Usage page now uses the ingestAfterFieldRemovalSize metric for visualizing Average ingest per day. It's still possible to query the humio-usage repository for the legacy segmentWriteBytes metric as well as ingestAfterFieldRemovalSize.

  • User Interface

    • It is now possible to opt for individual widget time selections when creating scheduled reports.

    • It is now possible to import a Field Aliasing schema from a YAML template. The option is available from the + New schema button when creating field aliasing schemas.

      For more information, see Configuring Field Aliasing.

    • It is now possible to filter by source field, alias field and description when creating field aliases.

      For more information, see Configuring Field Aliasing.

    • The available actions for managing field aliasing schemas have been reorganized in a new layout.

      For more information, see Managing Field Aliasing.

    • Field Aliasing schemas now require unique names. If you create a schema with a name that is already in use, you'll be prompted to give the schema a different name.

    • The query editor warnings are now also displayed as runtime warnings. As a result, new warnings for some queries might be displayed. For example, queries that use experimental features will now show warnings. These warnings may trigger notifications for alerts and scheduled searches that use features with associated warnings. However, these queries should continue to run normally. Other hints and information in the query editor remain unchanged.

    • A new IOC Lookup field interaction is now available for IP fields (for example, ip_address). Invoking this interaction will generate a new query by calling the ioc:lookup() query function. The new query will use the name of the selected IP field as the field argument for the function. For example:

      logscale Syntax
      ioc:lookup(field=[actor.ip], type="ip_address", confidenceThreshold="unverified", strict=true)

      For more information, see Field Interactions.

  • Automation and Alerts

    • Alerts and Scheduled searches now show additional warning types in the UI. Before, these warnings only appeared in the humio-activity logs.

  • GraphQL API

    • The refreshClusterManagementStats() GraphQL mutation has been added. When developing scripts to automate the unregistration of multiple evicted nodes at a time, this mutation can be called to validate that the node being unregistered can be terminated without risking data loss. As the mutation is expensive, it should not be called frequently.

    • A new, optional argument has been added to the restoreDeletedSearchDomain() GraphQL mutation. It makes it possible to restore a deleted search domain whose limit has also been deleted, by specifying a new limit to use instead.

    • When fetching environment variables through GraphQL, most of the configuration variables were redacted. The list of non-secret environment variables that should not be redacted has now been updated with additional variables.

    • The s3ResetArchiving() GraphQL mutation now supports resetting cluster-wide archiving on a repository through a new archivalKind field.

    • The new totalSearchDomains field has been added to the user.userOrGroupSearchDomainRoles() GraphQL query. This field indicates the number of unique search domains in the result.

    • A new token() GraphQL query now allows fetching a token based on its ID. Previously, you could only list tokens and filter by name.

  • Storage

    • To reduce load on the global database, datasources now take longer to enter the idle state when they stop receiving data.

  • Configuration

    • The new METRIC_RETENTION_IN_DAYS environment variable now allows users to configure the humio-metrics repository retention.

      For more information, see METRIC_RETENTION_IN_DAYS.
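
      For example, to keep the humio-metrics repository data for 30 days (an illustrative value):

      METRIC_RETENTION_IN_DAYS=30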

    • A new configuration variable MINISEGMENT_PREMERGE_MIN_FILES_WHEN_IDLE has been added. It sets a lower limit (default: 4) on how many minisegments must be present before an idle datasource's minisegments are merged into larger minisegments; for non-idle datasources, the limit is still controlled by MINISEGMENT_PREMERGE_MIN_FILES (default: 12). The merging is intended to reduce the global snapshot size.
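
      For example, to let idle datasources merge their minisegments even sooner (an illustrative value):

      MINISEGMENT_PREMERGE_MIN_FILES_WHEN_IDLE=2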

    • LogScale now provides environment variables to configure individual Kafka clients. The new environment variables have the following prefixes:

      • KAFKA_ADMIN

      • KAFKA_CHATTER_CONSUMER

      • KAFKA_CHATTER_PRODUCER

      • KAFKA_GLOBAL_CONSUMER

      • KAFKA_GLOBAL_PRODUCER

      • KAFKA_INGEST_QUEUE_CONSUMER

      • KAFKA_INGEST_QUEUE_PRODUCER

      In addition, KAFKA_COMMON can be used to pass configuration to all clients; however, settings configured using a client-specific prefix take precedence if the same setting is present with both prefixes.

      Kafka configuration options, such as request.timeout.ms, can be passed with these prefixes using a simple rewrite:

      1. Uppercase the option name. Example: REQUEST.TIMEOUT.MS

      2. Replace . with _. Example: REQUEST_TIMEOUT_MS.

      3. Apply the prefix for the target client. Example: KAFKA_INGEST_QUEUE_CONSUMER_REQUEST_TIMEOUT_MS.

      4. Pass this as an environment variable to LogScale on boot. Example: KAFKA_INGEST_QUEUE_CONSUMER_REQUEST_TIMEOUT_MS=30000.

      As a consequence, EXTRA_KAFKA_CONFIGS_FILE has been deprecated in favor of these new environment variables. This feature will be removed no earlier than version 1.225.0. The configuration passed via EXTRA_KAFKA_CONFIGS_FILE can be moved into the new environment variables using the procedure outlined above, while using the KAFKA_COMMON_ prefix.
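
      As a sketch, a properties file previously passed via EXTRA_KAFKA_CONFIGS_FILE with the following contents (values are illustrative):

      request.timeout.ms=30000
      max.request.size=1048576

      would translate into these environment variables using the KAFKA_COMMON_ prefix:

      KAFKA_COMMON_REQUEST_TIMEOUT_MS=30000
      KAFKA_COMMON_MAX_REQUEST_SIZE=1048576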

      After the EXTRA_KAFKA_CONFIGS_FILE removal, LogScale will not start if this variable is set. This behavior will help users recognize that they need to update their configuration, as described in this release note.

    • The enable.idempotence feature for Kafka producers, which is configurable through the EXTRA_KAFKA_CONFIGS_FILE variable, has been set to false by default due to stability issues reported in certain environments.

  • Dashboards and Widgets

    • The Time Chart widget has new tooltip options:

      • The widget's tooltip now shows only the top 5 series and the hovered series.

      • The ⇧ key expands the tooltip and shows all series.

      • The CTRL key activates both the show full legend labels and the show unformatted values features simultaneously.

      • Tooltip values are now aligned so that variables are left-aligned, and values are right-aligned.

    • It is now possible to configure series colors and names across dashboard widgets. Series configured at the widget level will override dashboard-level series.

      For more information, see Edit Dashboards.

    • The Table widget now supports multiple Markdown-formatted URLs within a single cell, so that it renders multiple clickable links separated by line breaks, improving upon the previous single-URL display.

    • It is now possible to normalize data for a stacked Bar Chart. In the styling properties of the widget:

      1. Set Type to Stacked.

      2. Under the Value axis section, set Type to Linear.

      3. Select the Normalize checkbox that appears.

    • The Bar Chart widget now offers new raw and abbreviated options for formatting numerical values.

    • Row selection is now available in the Table widget, on the Search page only: you can now select rows from a table and copy them to the clipboard.

    • A new option to format the numerical values for the Pie Chart and Heat Map widgets is now available.

    • A new option to select value formatting for Time Chart is now available. The resizing behavior of the chart has also been adjusted.

    • New settings for formatting numerical values in the Scatter Chart are now available.

  • Ingestion

    • Clicking Run tests on the parser code page now produces events that are more similar to what an ingested event would look like in certain edge cases.

    • The details panel for test cases on the Parsers page will now link to relevant error documentation where such documentation exists.

  • Log Collector

    • LogScale Collector now handles a longer list of available downloads. Older versions which have reached end-of-life are marked as such.

  • Queries

    • Execution time is now included in the queries' execution information in the activity logs.

    • Main queries now support retrying the polling of subqueries that are, for example, being restarted or otherwise temporarily unavailable (such as defineTable() subqueries). This change addresses the Subquery not found on poll warning that occurred when subqueries were being restarted.

  • Functions

    • Using the functions eventSize(), eventFieldCount(), and eventInternals() after an aggregator will now give a warning, indicating that no result will be returned.

    • Introducing the new query function array:sort(), which sorts the elements of a given array using a given sort type and order. This function is similar to the sort() function, but works on the array elements of a single event instead of across multiple events.

      For more information, see array:sort().
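
      As a minimal sketch (the array field name is illustrative, and the parameter names and values are assumptions based on similar array functions):

      logscale
      array:sort(array="values[]", type=number, order=descending)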

    • The var parameter of the array:filter() function is now optional and defaults to the name of the input array.
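
      For example, with this default, a filter like the following sketch (array and field names are illustrative) no longer needs an explicit var argument:

      logscale
      array:filter(array="values[]", function={values > 5})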

    • A new prefix parameter has been added to the kvParse() function. The parameter is an alias for the existing as parameter.

    • The new query functions array:exists() and objectArray:exists() are now available. They are both used to filter events based on whether the given array contains an element that satisfies a given condition.

      For performance reasons, LogScale recommends using array:exists(), but it can be used for flat arrays only (not for nested arrays). For nested arrays (for example JSON structures), use objectArray:exists() instead.

      Both functions offer more flexibility compared to array:contains() in cases where, for example, you need to compare array elements with values from other fields.
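
      As an illustrative sketch (field names are made up, and the exact condition syntax is an assumption), filtering events whose array contains one of a set of values might look like:

      logscale
      array:exists(array="users[]", condition=in(users, values=["admin", "root"]))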

    • Sequence functions for analyzing ordered event sequences are now available.

      • accumulate(). Apply cumulative aggregation; for example, running totals or averages.

      • slidingWindow(). Apply aggregation over a moving window of specified event count. Suitable for trend analysis of recent events.

      • slidingTimeWindow(). Apply aggregation over a moving window specified as a time span. Suitable for time-series analysis.

      • partition(). Split a sequence of events into multiple partitions and apply an aggregation on each partition. Suitable for grouped analyses like user sessions.

      • neighbor(). Access fields from preceding or succeeding events in a sequence. Suitable for comparing events in sequential data.

      Usage guidelines:

      • Sequence functions must be used after an aggregator function to establish an ordering. LogScale recommends using the sort() function before sequence functions to ensure meaningful event order (see the sketch after these guidelines).

      • Sequence functions differ from other aggregator functions in that they typically annotate events with the aggregation results.

      • Combine sequence functions for more complex analyses.

      For more information, see Sequence Query Functions.
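
      For example, a minimal sketch computing a running total with accumulate() (assuming events carry a numeric field named value):

      logscale
      sort(@timestamp, order=asc)
      | accumulate(sum(value))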

    • The new query function base64Encode() is now available. The function allows the user to base64-encode a field, and output the results in another field. For instance, the string hello, world will encode as aGVsbG8sIHdvcmxk.

      Usage example: base64Encode(fieldName) will produce events with a field named _base64Encode, containing the encoded value of the fieldName field.

Fixed in this release

  • Installation and Deployment

    • Testing event forwarder connectivity would permanently consume a thread and TCP connection to the Kafka broker. This issue has now been fixed.

  • User Interface

    • Fixed an issue where only the first error for a field would be returned from the API and shown in the UI.

    • The error message used when LogScale fails to import a YAML template for an asset (dashboard, parser, etc.) has been improved for the case where the template schema is not recognized.

    • Scheduled reports could use the wrong execution time when generated with a delay relative to the scheduled time. The issue has now been fixed so that the scheduled time is used, regardless of when the report is actually generated.

    • The event distribution chart toggle button has been removed from the Table tab on the Search view, as the event distribution chart does not apply for this tab.

  • Automation and Alerts

    • These issues on query warning handling have been fixed:

      • Filter and Aggregate alerts would sometimes wait too long on query warnings about missing data.

      • Filter alerts would completely stop retrying on query warnings about missing data after having reached the timeout once.

      • Filter alerts would retry polling a finished static query due to query warnings about missing data, instead of restarting the query.

    • When viewing an Email action in the UI, the subject and body fields would be swapped. If the action was then saved from the UI, the swapped fields would also be persisted to storage. The same would happen when testing the action from the UI. This issue has now been fixed.

    • Listing actions on a trigger referencing a non-existing action would fail. This issue has been fixed.

  • GraphQL API

    • The searchDomainRoles GraphQL field on the Group datatype could fail if given a view ID for which the group did not have any role assignments. This issue has now been fixed.

  • Storage

    • An issue related to undersized-merging of existing segments has been fixed. Previously, this process could create segments spanning up to 15 days, even in repositories with shorter retention periods (such as 30 days). Now, the merging process adheres to the UndersizedMergingRetentionPercentage dynamic configuration. For example, in a repository with a 30-day retention period, the maximum span for undersized-merging output is now 6 days.

    • Fixed a race condition that could cause removal of the topOffsets field from segments earlier than intended, risking the loss of the most recent data during digest reassignment.

    • Slow background cleanup work could block digest from starting, which could in turn cause nodes to crash on digest reassignment in large clusters. This issue has now been fixed.

    • A bug that was introduced in version 1.173.0 has been fixed. This bug could cause a node to crash when hash filter files were deleted during digest processing.

  • Configuration

  • Dashboards and Widgets

    • Errors were occurring in dashboard queries when dashboard filters contained parameters that were only used within the filter itself and nowhere else in the query. This issue has now been fixed.

    • A series configuration for a widget's title and color would not take immediate effect when updated in the side panel. This issue has now been fixed.

    • Updating invalid input patterns for a parameter would not create the typed values on pressing Enter. This issue has been fixed.

    • Renaming the ID of a parameter inside a panel on a dashboard would make the parameter jump to the top panel. This issue has now been fixed.

    • A Query Editor error in one of the widgets on a dashboard could result in an error on the Query Editor of a parameter. This issue has now been fixed.

  • Ingestion

    • When ingesting events with additional tags, such as when using the humio-structured endpoint, tags that were specified in the parser for removal were discounted from ingest accounting, but not removed from the event. This issue has now been fixed.

  • Log Collector

    • When computing group memberships in fleet management, a query timeout could result in collectors losing their group memberships. This issue has now been fixed.

  • Queries

    • Queries did not restart when adding, changing, or removing view connections. This issue has now been fixed so that queries correctly restart when view connections are updated.

    • The parsing of field values with large numbers (for example 92233720368547758) could in rare cases cause an integer overflow and produce small negative values. This issue has now been fixed.

    • Queries would sometimes be incorrectly reused even though they had a warning attached. This would mean that a new query would get the same warnings instead of running a new search. This issue has now been fixed.

    • A warning about unresponsive nodes would remain attached to a query even though it was no longer relevant. This issue has now been fixed.

    • Quadratic time complexity in queries could significantly slow down processing, causing query submission failures. This issue has now been fixed.

    • A Query Scheduling issue has been fixed: queries that encountered restrictions or errors would continue to execute on individual segment blocks, even if the errors would cause the query to cancel.

    • Some regular expressions would continue to run even if the query was cancelled. This issue has now been fixed.

    • An internal file verification job might not start correctly, which in turn could block digest. This issue has now been fixed.

    • The query-millis metric wrongly counted the time spent waiting for CPU. This has been fixed so that the metric now measures only the CPU time used by the query.

    • A query might be started on an incorrect node in a mixed-version cluster, which would lead to failures when polling the query. This issue has now been fixed.

  • Functions

    • Matching on multiple rows in mode=cidr missed some matching rows. This happened in cases where rows with different subnets matched the same event.

      Example of the bug, using a file example.csv:

      column1       column2
      1.2.3.4/25    one
      1.2.3.4/24    two
      1.2.3.4/24    three

      For the query:

      logscale
      match(example.csv, field=column1, mode=cidr, nrows=3)

      an event with the field column1=1.2.3.10 would only match the last two rows. This issue has now been fixed so that all three rows match the event.

    • In some cases the parseLEEF() function could not parse the event if the devTimeFormat field did not match the corresponding devTime field. This issue has now been fixed.

    • A query would not return a result if a query function encountered a NaN value. This issue has now been fixed.

  • Packages

    • Live queries would not get restarted whenever a referenced saved query originating from a package was updated. For example, a live query like $myPackage:mySavedQuery() would not get restarted whenever the contents of mySavedQuery was updated on the package. This issue has now been fixed.

Improvement

  • Automation and Alerts

    • Handling of query warnings for Alerts and Scheduled searches has been improved:

      • Filter alerts, Aggregate alerts and Scheduled searches no longer restart or keep polling a query with a query warning that is permanent.

      • Filter and Aggregate alerts now try restarting the query for a while if it has a warning that does not automatically clear when no longer applicable.

  • Storage

    • The ingest reader loop now marks datasources as idle more quickly when Kafka ingest flow is below maximum capacity.

    • The performance of writes to the chatter topic has been enhanced. This improvement addresses potential degraded ingest performance on clusters with numerous ingest queue partitions.

    • To reduce load on the global database, datasources are now further delayed in being marked as idle when they receive no data.

    • The load on the global database has been slightly reduced by removing some unnecessary messages that were being sent by mistake.

  • Queries

    • Queries producing more events via their aggregators than the limit configured by AggregatorOutputRowLimit are now cancelled and do not produce an output. Previously, queries would only produce a log entry and continue to run when the limit was exceeded. This can happen, for instance, when nesting multiple groupBy() function calls with high-cardinality results (see the sketch below). This change is being introduced to protect the system against runaway queries that take up resources from the whole cluster.
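
      For instance, a nested grouping like the following sketch (field names are illustrative) can exceed the limit when both fields have high cardinality:

      logscale
      groupBy(userId, function=groupBy(sessionId, function=count()))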

    • Regular expressions compiled using the new LogScale Regular Expression Engine v2 are now cached to avoid compiling the same regex multiple times.

    • Error recovery messages have been improved in the Query Editor. LogScale now informs about any missing or extra arguments in queries when using, for example, the worldMap() and rename() functions.

  • Functions

    • The error messages for the join() function's start and end parameters have been updated to include absolute values.

  • Other

    • A timedOut field added to the request log now indicates whether the client received a 503 response.