Falcon LogScale 1.174.0 GA (2025-02-04)

Version: 1.174.0
Type: GA
Release Date: 2025-02-04
Availability: Cloud
End of Support: 2026-03-31
Security Updates: No
Upgrades From: 1.150.0
Downgrades To: 1.157.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The color field on the Role type has been marked as deprecated (will be removed in version 1.195).

  • The storage task of the GraphQL NodeTaskEnum is deprecated and scheduled to be removed in version 1.185. This affects the following items:

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

  • The EXTRA_KAFKA_CONFIGS_FILE configuration variable has been deprecated and is planned for removal no earlier than version 1.225.0. For more information, see RN Issue.

New features and improvements

  • GraphQL API

    • When environment variables are fetched through GraphQL, most configuration variables are redacted. The list of non-secret environment variables that are exempt from redaction has been extended with additional variables.

  • Configuration

    • A new configuration variable MINISEGMENT_PREMERGE_MIN_FILES_WHEN_IDLE has been added. For idle datasources, it sets a lower limit (default: 4) on how many minisegments must be present before they are merged into larger minisegments; for non-idle datasources, the limit is still controlled by MINISEGMENT_PREMERGE_MIN_FILES (default: 12). The merging is intended to reduce the global snapshot size.
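
      As an illustration only (this is a sketch, not an excerpt from the product documentation), a node's environment might set the two thresholds as follows, using the default values described above:

```shell
# Hypothetical environment snippet for a LogScale node.
# Idle datasources may pre-merge minisegments once 4 files are present;
# non-idle datasources still wait for the higher default threshold.
MINISEGMENT_PREMERGE_MIN_FILES_WHEN_IDLE=4
MINISEGMENT_PREMERGE_MIN_FILES=12
```

      Keeping the idle threshold lower than the non-idle one lets quiet datasources compact sooner without changing merge behavior under load.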

  • Ingestion

    • The details panel for test cases on the Parsers page will now link to relevant error documentation where such documentation exists.

  • Dashboards and Widgets

    • The Bar Chart widget now offers new raw and abbreviated options for formatting numerical values.

  • Functions

    • Sequence functions for analyzing ordered event sequences are now available.

      • accumulate(). Apply cumulative aggregation; for example, running totals or averages.

      • slidingWindow(). Apply aggregation over a moving window of specified event count. Suitable for trend analysis of recent events.

      • slidingTimeWindow(). Apply aggregation over a moving window specified as a time span. Suitable for time-series analysis.

      • partition(). Split a sequence of events into multiple partitions and apply an aggregation on each partition. Suitable for grouped analyses like user sessions.

      • neighbor(). Access fields from preceding or succeeding events in a sequence. Suitable for comparing events in sequential data.

      Usage guidelines:

      • Sequence functions must be used after an aggregator function to establish an ordering. LogScale recommends using the sort() function before sequence functions to ensure meaningful event order.

      • Sequence functions differ from other aggregator functions in that they typically annotate events with the aggregation results.

      • Sequence functions can be combined for more complex analysis.

      For more information, see Sequence Query Functions.
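
      As a minimal sketch of the pattern described above (the field names value and prev.value are hypothetical, and the exact parameter syntax should be verified against the Sequence Query Functions reference), a bounded, ordered sequence can be annotated with a running total and a delta against the previous event:

```
// Bound the result set first so the events have a defined order,
// then annotate each event with a running total of "value",
// and compare each event's "value" with the previous event's.
head(limit=200)
| accumulate(sum(value, as=runningTotal))
| neighbor(value, prefix=prev)
| delta := value - prev.value
```

      Note how head() provides the required preceding aggregator, and how accumulate() and neighbor() annotate events rather than collapsing them, which is what allows the functions to be chained.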

Fixed in this release

  • User Interface

    • The error message shown when LogScale fails to import a YAML template for an asset (dashboard, parser, etc.) has been improved for the case where the template schema is not recognized.

    • The event distribution chart toggle button has been removed from the Table tab on the Search view, as the event distribution chart does not apply for this tab.

  • Automation and Alerts

    • The following issues in query warning handling have been fixed:

      • Filter and Aggregate alerts would sometimes wait too long on query warnings about missing data.

      • Filter alerts would stop retrying completely on query warnings about missing data after reaching the timeout once.

      • Filter alerts would retry polling a finished static query due to query warnings about missing data, instead of restarting the query.

  • Storage

    • Fixed a race condition that could cause the topOffsets field to be removed from segments earlier than intended, risking loss of the most recent data during digest reassignment.

  • Dashboards and Widgets

    • Errors were occurring in dashboard queries when dashboard filters contained parameters that were only used within the filter itself and nowhere else in the query. This issue has now been fixed.

    • A series configuration for a widget's title and color would not take immediate effect when updated in the side panel. This issue has now been fixed.

    • Updating invalid input patterns for a parameter would not create the typed values on enter. This issue has been fixed.

  • Queries

    • A warning about unresponsive nodes would remain attached to a query even though it was no longer relevant. This issue has now been fixed.

    • Quadratic time complexity in queries could significantly slow down processing, causing query submission failures. This issue has now been fixed.

    • A Query Scheduling issue has been fixed: queries that encountered restrictions or errors would continue to execute on individual segment blocks, even if the errors would cause the query to cancel.

    • Some regular expressions would continue to run even if the query was cancelled. This issue has now been fixed.

  • Functions

    • In some cases the parseLEEF() function could not parse the event if the devTimeFormat field did not match the corresponding devTime field. This issue has now been fixed.

    • A query would not return a result if a query function encountered a NaN value. This issue has now been fixed.

  • Packages

    • Live queries would not get restarted whenever a referenced saved query originating from a package was updated. For example, a live query like $myPackage:mySavedQuery() would not get restarted whenever the contents of mySavedQuery was updated on the package. This issue has now been fixed.

Improvement

  • Automation and Alerts

    • Handling of query warnings for Alerts and Scheduled searches has been improved:

      • Filter alerts, Aggregate alerts and Scheduled searches no longer restart or keep polling a query with a query warning that is permanent.

      • Filter and Aggregate alerts now try restarting the query for a while if it has a warning that does not clear automatically when no longer applicable.

  • Storage

    • The ingest reader loop now marks datasources as idle more quickly when Kafka ingest flow is below maximum capacity.

    • The performance of writes to the chatter topic has been improved, addressing potential ingest performance degradation on clusters with many ingest queue partitions.

    • To reduce load on global, datasources that receive no data are now marked as idle after a longer delay.

  • Other

    • A timedOut field added to the request log now indicates whether the client received a 503 response.