Falcon LogScale 1.153.0 GA (2024-08-27)

Version: 1.153.0
Type: GA
Release Date: 2024-08-27
Availability: Cloud
End of Support: 2025-09-30
Security Updates: No
Upgrades From: 1.112
Config. Changes: No

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed to ease migration from older deployments where the launcher script was not available. The launcher script removes the need to set parameters in this variable manually, so the variable is no longer required; using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this variable should migrate to the other variables described at Configuration.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Automation and Alerts

    • Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.

      For more information on alert statuses, see Monitoring Alerts.

  • Configuration

    • Autoshards no longer respond to ingest delay by default, and now support round-robin distribution instead.

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument without [ ] brackets; array:length("field") would then always produce the result 0, since no array named field existed. The function has now been updated to throw an exception if given a non-array field name in the array argument. The given array name must therefore include [ ] brackets, since the function only works on array fields.
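
      As a minimal sketch of the updated contract (the field name myArray is illustrative), counting the elements of an array field requires the [ ] brackets:

        // Counts the elements of the array field myArray[]; the result is
        // returned in a field named _length by default.
        array:length("myArray[]")

        // Before this change the following returned 0; it now fails with an
        // error, because "myArray" without brackets is not an array field name:
        // array:length("myArray")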

New features and improvements

  • UI Changes

    • UI workflow updates have been made in the Groups page for managing permissions and roles.

      For more information, see Manage Groups.

  • Automation and Alerts

    • The following adjustments have been made for Scheduled PDF Reports:

      • If the feature is disabled for the cluster, the Scheduled reports menu item under Automation is not shown.

      • If the feature is disabled or the render service is in an error state, users who are granted the ChangeScheduledReport permission and try to access it are presented with a banner on the Scheduled reports overview page.

      • The permissions overview in the UI now notes that the feature must be enabled and configured correctly for the cluster in order for the ChangeScheduledReport permission to have any effect.

  • Storage

    • For better efficiency, more than one object is now deleted from Bucket Storage per request to S3, reducing the number of requests to S3.

  • GraphQL API

    • The getFileContent() GraphQL query now filters CSV file rows case-insensitively and allows partial text matches when the filterString input argument is provided. This makes it possible to search for rows without knowing the full column values, and while ignoring case.

    • The defaultTimeZone GraphQL field on the UserSettings GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the organization's default time zone through the API, use the defaultTimeZone field on the OrganizationConfigs GraphQL type.

  • Configuration

    • Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repository-specific configurations. This feature allows the cluster administrator to set up archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:

      • S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_SECRETKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_REGION (required)

      • S3_CLUSTERWIDE_ARCHIVING_BUCKET (required)

      • S3_CLUSTERWIDE_ARCHIVING_PREFIX (defaults to empty string)

      • S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN

      • S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE

      • S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT (default is cores/4)

      • S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT (default is false)

      Most of these configuration variables work as they do for S3 Archiving, except that the region and bucket are selected here via configuration rather than dynamically by end users, and authentication is via an explicit access key and secret rather than via IAM roles or any other means. An illustrative configuration sketch follows the list of dynamic configurations below.

      The following dynamic configurations are added for this feature:

      • S3ArchivingClusterWideDisabled (defaults to false when not set) — allows temporarily pausing the archiving in case of issues, for example those triggered by the traffic that archiving creates.

      • S3ArchivingClusterWideEndAt and S3ArchivingClusterWideStartFrom — timestamps in milliseconds of the "cut" that selects which segment files, and which events in them, to include. When these configuration variables are unset (the default), no filtering by time is applied.

      • S3ArchivingClusterWideRegexForRepoName (defaults to not matching if not set) — the repository name regex must be set in order to enable the feature. When set, all repositories whose names match the regex (unanchored) are archived using the cluster-wide configuration from these variables.
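
      As an illustrative sketch only (all values below are placeholders, not recommendations), a cluster that archives every repository whose name starts with prod- could set the static configuration:

        S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY=<access-key>
        S3_CLUSTERWIDE_ARCHIVING_SECRETKEY=<secret-key>
        S3_CLUSTERWIDE_ARCHIVING_REGION=us-east-1
        S3_CLUSTERWIDE_ARCHIVING_BUCKET=example-archive-bucket
        S3_CLUSTERWIDE_ARCHIVING_PREFIX=cluster-archive

      and then set the dynamic configuration S3ArchivingClusterWideRegexForRepoName to a regex such as prod- to enable the feature for the matching repositories (remember that the regex is unanchored).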

  • Ingestion

    • On the Code page, accessible from the Parsers menu when writing a new parser, the following validation rules now apply globally:

      • Arrays must be contiguous and must have a field with index 0. For instance, myArray[0] := "some value"

      • Fields that are prefixed with # must be configured to be tagged (to avoid falsely tagged fields).

      An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing; see the sketch below for a script that passes both rules.

      For more information, see Creating a New Parser.
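
      As a minimal sketch, a parser script that passes both rules could look like the following (field names and values are illustrative; a #-prefixed field is only valid when it is also configured as a tag field in the parser's settings):

        // Arrays must be contiguous and start at index 0:
        myArray[0] := "first"
        | myArray[1] := "second"
        // Only valid if env is configured as a tag field for this parser;
        // otherwise the Code page reports a validation error:
        | #env := "prod"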

Fixed in this release

  • UI Changes

    • A race condition in LogScale Multi-Cluster Search has been fixed: a done query with an incomplete result could be overwritten, causing the query to never complete.

    • The Export to file dialog used when Exporting Data has been fixed: the CSV fields input would in some cases not be populated with all fields.

  • Storage

    • Throttling for bucket uploads/downloads has been fixed, as it could cause an unintentionally high number of concurrent uploads or downloads, to the point of exceeding the pool of connections.

    • Segments could be considered under-replicated for a long time, leading to events being retained in Kafka for extended periods. This behavior has now been fixed.

  • Functions

    • The query backtracking limit would wrongly apply to the total number of events, rather than to the number of times individual events are passed through the query pipeline. This issue has now been fixed.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter could cause a reduction in performance for the query and block other queries from executing.

Improvement

  • UI Changes

    • The performance of the query editor has been improved, especially when working with large query results.

  • Ingestion

    • The input validation on the Split by AWS records preprocessing in Set up a New Ingest Feed has been simplified: it still validates that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a Records array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled: in such cases, the emitted digest files (which do not have a Records array) would halt the ingest feed. These digest files are now ignored.

      For more background information, see this related release note.