Falcon LogScale 1.142.0 Preview (2024-06-11)

Version: 1.142.0
Type: Preview
Release Date: 2024-06-11
Availability: Cloud, On-Prem
End of Support: 2025-07-31
Security Updates: No
Upgrades From: 1.112
JDK Compatibility: 17-22
Config. Changes: No

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The any argument in sort() has been removed. Queries where any is explicitly set will be rejected. Please change the argument to either number, hex or string, depending on which option is the best fit for the data your query operates on.

    • The following changes have been made to sort():

      • sort() will no longer try to guess the type of the field values; it now defaults to number.

      • The number and hex options have been redefined as total orders: values of the given type are sorted according to their natural order, and values that cannot be understood as the given type are sorted lexicographically. For instance, sorting the values 10, 100, 20, bcd, cde, abc in ascending order with number yields: 10, 20, 100, abc, bcd, cde. A query sketch follows below.
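
      As a minimal sketch, a query that previously relied on type=any (or on type guessing) should now name the concrete type explicitly. The fields and values here are illustrative only:

        // Rejected from this release onward:
        // sort(field=responseSize, type=any)

        // Sort numerically instead; values that do not parse as numbers
        // are ordered lexicographically among themselves:
        groupBy(field=statuscode)
        | sort(field=_count, type=number, order=desc, limit=10)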

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script used for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead; the launcher will then configure both LogScale and the JVM properly. A brief sketch follows below.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.
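
      A minimal sketch of the new configuration, assuming a launcher entry point at bin/humio-server-start.sh (the script name and path may differ in your installation):

        # Before 1.148.0, the core count could be pinned with a JVM flag
        # on the command line, e.g. -XX:ActiveProcessorCount=16.
        # From 1.148.0 that flag is ignored; set CORES instead and let the
        # launcher configure both LogScale and the JVM:
        export CORES=16
        bin/humio-server-start.sh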

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should instead use Kafka's own tools for editing partition assignments, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka installation, as sketched below.
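
    As a sketch of the Kafka-native workflow (the topic name humio-ingest is assumed to be the default ingest queue topic and may differ if your cluster uses a topic prefix):

      # Inspect the current partition assignment of the ingest queue:
      bin/kafka-topics.sh --bootstrap-server localhost:9092 \
        --describe --topic humio-ingest

      # Apply a reassignment described in a JSON spec, then verify it:
      bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
        --reassignment-json-file reassignment.json --execute
      bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
        --reassignment-json-file reassignment.json --verify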

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer installation will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release of these images will ship with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed to support migration from older deployments where the launcher script was not available. The launcher script replaces the need to set parameters in this variable manually, so it is no longer required; using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this variable should migrate to the variables described at Override garbage collection configuration within the launcher script; an illustrative migration is sketched below.
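
    As an illustrative migration (the flags shown are examples, not recommendations), options previously crammed into HUMIO_JVM_ARGS move into the launcher's dedicated variables:

      # Before (HUMIO_JVM_ARGS is removed in 1.154.0):
      #   export HUMIO_JVM_ARGS="-Xss2M -XX:+AlwaysPreTouch"
      # After: move the flags into the launcher's dedicated variables;
      # GC-specific flags belong in the garbage collection override
      # variables described in the launcher documentation:
      export HUMIO_JVM_PERFORMANCE_OPTS="-Xss2M -XX:+AlwaysPreTouch"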

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Storage

    • When a digest leader exceeds the PRIMARY_STORAGE_MAX_FILL_PERCENTAGE threshold, it now pauses while holding on to partition leadership, instead of pausing by releasing leadership of all partitions. See the example below.
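
      For reference, the threshold is set via an environment variable; the value shown here is illustrative, not necessarily the default:

        # Pause digest when the primary storage disk is over 90% full:
        PRIMARY_STORAGE_MAX_FILL_PERCENTAGE=90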

New features and improvements

  • Security

    • The new ManageViewConnections Organization Administration permission has been added. It grants access to:

      • List all views and repositories

      • Create views linked to any repository

      • Update Connections of any existing view

  • Installation and Deployment

    • NUMA support for the Docker images is now enabled:

      • The launcher script has been updated to set -XX:+UseNUMA in the default HUMIO_JVM_PERFORMANCE_OPTS.

      • The Docker images have been updated to include libnuma.so.1, which allows the JDK to optimize for NUMA hardware. A caveat on overriding these defaults follows below.
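
      A related caveat, assuming the launcher applies its defaults only when the variable is unset: if you override HUMIO_JVM_PERFORMANCE_OPTS yourself, keep -XX:+UseNUMA in the override to retain the NUMA optimization (the other flag shown is illustrative):

        # Custom performance flags, with -XX:+UseNUMA carried over:
        export HUMIO_JVM_PERFORMANCE_OPTS="-XX:+UseNUMA -XX:+AlwaysPreTouch"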

  • Dashboards and Widgets

    • Widget-level time selection can now be adjusted when a dashboard is used in view mode. This change adds flexibility in working with time on the dashboard and allows for easy comparative analysis on the fly.

      For more information, see Widget Time Selector.

Fixed in this release

  • Storage

    • Pending merges of segments would contend with the verification of segments being transferred between nodes or bucket storage. This resulted in spuriously long transfer times, due to queueing of the verification step for the segment file. This issue has now been fixed.

  • Other

    • A fix has been made to reduce contention in file reading for queries, resulting in improved performance.

Improvement

  • Storage

    • The amount of work required from the local segment verifier at node boot has been reduced.