Falcon LogScale 1.146.0 Preview (2024-07-09)

Version: 1.146.0
Type: Preview
Release Date: 2024-07-09
Availability: Cloud
End of Support: Next Stable
Security Updates: No
Upgrades From: 1.112
JDK Compatibility: 21-22
Config. Changes: No

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.
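
      As a sketch of the coming behavior (the launcher script name and install path below are assumptions for illustration):

        # Hypothetical example: pin LogScale to 16 cores via the launcher
        # environment instead of a JVM flag.
        export CORES=16
        /opt/logscale/bin/humio-server-start.sh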

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should instead use Kafka's own tools for editing partition assignments, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka installation, as sketched below.
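
    A minimal sketch of such a reassignment, assuming an ingest queue topic named humio-ingest and broker IDs 1-3 (both placeholders; check your cluster):

      # Inspect the current partition assignment for the topic.
      bin/kafka-topics.sh --bootstrap-server localhost:9092 \
        --describe --topic humio-ingest

      # Write reassign.json, mapping partition 0 to brokers 1, 2 and 3
      # (illustrative values only).
      printf '%s\n' '{"version":1,"partitions":[{"topic":"humio-ingest","partition":0,"replicas":[1,2,3]}]}' > reassign.json

      bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
        --reassignment-json-file reassign.json --execute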

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.
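
    As a rough sketch of the switch (the JDK handling described below follows from the bundling noted above; the exact archive layout may differ by release):

      # Before: generic artifact plus a separately installed JDK.
      #   tar -xzf server.tar.gz
      # After: OS/architecture-specific artifact with the JDK bundled.
      tar -xzf server-linux_x64.tar.gz
      # No separate JDK install is needed; the release ships with its own JDK.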

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The final release of these images is planned to coincide with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: Strimzi (see the sketch after this list)

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
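
    As a minimal sketch of the Strimzi route, assuming a dedicated kafka namespace and the upstream quickstart manifests (verify against the current Strimzi documentation):

      # Install the Strimzi operator into its own namespace.
      kubectl create namespace kafka
      kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

      # A Kafka cluster is then declared via a Kafka custom resource;
      # confirm the operator is running before applying one.
      kubectl get pods -n kafka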

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed to ease migration from older deployments where the launcher script was not available. The launcher script removes the need to set parameters in this variable manually, so it is no longer required, and using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this variable should migrate to the dedicated variables described at Override garbage collection configuration within the launcher script.
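
    A hedged sketch of the migration; the dedicated variable names below are assumptions, so take the authoritative names from the launcher script documentation:

      # Before (deprecated): all JVM flags in one variable.
      #   export HUMIO_JVM_ARGS="-Xms16g -Xmx16g -XX:+UseParallelGC"
      # After: split the flags across the launcher's dedicated variables.
      export HUMIO_MEMORY_OPTS="-Xms16g -Xmx16g"   # assumed memory-options variable
      export HUMIO_GC_OPTS="-XX:+UseParallelGC"    # assumed GC-override variable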

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

New features and improvements

  • Automation and Alerts

    • A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a longer throttle period will continue to run, but when it is edited, the throttle period must be lowered to at most 1 week.

  • Storage

    • An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently by setting the Content-MD5 header during upload, allowing S3 to perform file validation instead of LogScale doing it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode, so setting the following variables will have no effect:

      In case of issues, the S3 client can be disabled by setting USE_AWS_SDK=false, which will set LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will be deprecated and removed eventually.
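
      A minimal sketch of the fallback, assuming configuration is passed through the node's environment (adjust for your deployment method):

        # Revert to the previous S3 client; report the issue to Support,
        # since this fallback will be removed eventually.
        export USE_AWS_SDK=false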

  • GraphQL API

    • The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still UploadedFileSnapshot!, but the lines field will be changed to return [] when the file is empty. Previously, the return value was a list containing an empty list, [[]]. This change applies both to empty files and to cases where the provided filter string doesn't match any rows in the file.
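
      An illustrative sketch of observing the new behavior; the argument names, URL, and token below are assumptions, so consult the GraphQL schema for the exact signature:

        # Query file contents via the GraphQL API; for an empty file the
        # lines field now returns [] rather than [[]].
        curl -s "$LOGSCALE_URL/graphql" \
          -H "Authorization: Bearer $API_TOKEN" \
          -H "Content-Type: application/json" \
          -d '{"query":"{ getFileContent(fileName: \"lookup.csv\") { lines totalLinesCount } }"}'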

Fixed in this release

  • UI Changes

    • The event histogram would not adhere to the timezone selected for the query.

  • GraphQL API

    • The getFileContent() GraphQL endpoint will now return an UploadedFileSnapshot! datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file.

  • Functions

    • Parsing the empty string as a number could lead to errors that caused the query to fail (in the formatTime() function, for example). This issue has now been fixed.