Stable Release

Falcon LogScale 1.142.1 Stable (2024-07-03)

Version: 1.142.1
Type: Stable
Release Date: 2024-07-03
Availability: Cloud, On-Prem
End of Support: 2025-07-31
Security Updates: Yes
Upgrades From: 1.112
JDK Compatibility: 17-22
Config. Changes: No
TAR Checksum  Value
MD5           1a5dd967685b998da46afaed3c0fe18c
SHA1          4b87496f773a8ac0c51e5b27f35de15475fc34fd
SHA256        b2fc87e706d02f48694caaf422f2700f9d178f56afe06e35a006ae1b8524a844
SHA512        62ed51ae91d7e4c2c9276a1473ae26303ba89f36dece7f7ffbbb09d169c52b219ef7f79a3886c60cb9163823c8564feda3b58bfec23cc25b9abf107fbc7308a5
Docker Image  Included JDK  SHA256 Checksum
humio         22            04af3a13ac01a9278105b223bc61639b20c735439fc9a131d49ec240cd50bc26
humio-core    22            a3868201a659cccb6bf44e0aedc18de6789938ac1e500b49aebc9362ec106759
kafka         22            30bff675f267171b99046d68419429f3b78e0e258282feade9bae1d726100b92
zookeeper     22            cc49c209a4de0de071e0be5bba530c6f39b012b0183f444daf2b73ea56cae646

Download

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • The limit parameter has been added to the rdns() function. It is controlled by the dynamic configurations RdnsMaxLimit and RdnsDefaultLimit. This addition is a breaking change, prompted by incidents caused by the large implicit limit previously in use.

      For more information, see rdns().
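
      As an illustration, the sketch below submits a query that sets the limit explicitly instead of relying on RdnsDefaultLimit. It uses the Query Jobs API; the base URL, repository name, API token, and field names are placeholders, not values from this release.

        # Illustrative sketch only: submit a query that calls rdns() with an
        # explicit limit rather than relying on the RdnsDefaultLimit default.
        # Base URL, repository, token, and field names are placeholders.
        import requests

        BASE_URL = "https://logscale.example.com"
        REPOSITORY = "example-repo"
        API_TOKEN = "..."  # personal API token

        query = '#type = "accesslog" | rdns(field=client_ip, as=client_host, limit=1000)'

        resp = requests.post(
            f"{BASE_URL}/api/v1/repositories/{REPOSITORY}/queryjobs",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"queryString": query, "start": "1h", "isLive": False},
        )
        resp.raise_for_status()
        print("Submitted query job:", resp.json()["id"])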

Advanced Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage is configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead; this will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should instead use Kafka's own tools for editing partition assignments, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka installation.
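
    As an illustration only, the sketch below inspects the current partition assignment with Kafka's bundled tooling from Python; the Kafka installation path, broker address, and topic name are assumptions and depend on your deployment.

      # Illustrative sketch: inspect the ingest queue partition assignment with
      # Kafka's own tools instead of the deprecated LogScale endpoints.
      # The install path, broker address, and topic name are assumptions.
      import subprocess

      KAFKA_HOME = "/opt/kafka"       # assumed Kafka installation directory
      BOOTSTRAP = "localhost:9092"    # assumed broker address
      TOPIC = "humio-ingest"          # assumed ingest queue topic; may carry a prefix

      # Show partitions, leaders, and replica assignments for the topic.
      subprocess.run(
          [f"{KAFKA_HOME}/bin/kafka-topics.sh",
           "--bootstrap-server", BOOTSTRAP,
           "--describe", "--topic", TOPIC],
          check=True,
      )
      # A reassignment would then be applied with bin/kafka-reassign-partitions.sh.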

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer installation will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed to ease migration from older deployments where the launcher script was not available. The launcher script removes the need to set parameters in this variable manually, so it is no longer required; using the launcher script is now the recommended way to launch LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this variable should migrate to the variables described at Override garbage collection configuration within the launcher script.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • API

    • It is no longer possible to revive a query by polling it after it has been stopped.

      For more information, see Submitting Query Jobs.
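
      A minimal polling sketch (placeholder base URL, repository, token, and job id) illustrating the new behavior: once the job is done or has been stopped, continued polling only returns the final state and does not restart the query.

        # Illustrative sketch: poll an existing query job until it reports done.
        # Polling a stopped or completed job no longer revives it.
        import time
        import requests

        BASE_URL = "https://logscale.example.com"
        REPOSITORY = "example-repo"
        API_TOKEN = "..."
        JOB_ID = "..."  # id returned when the query job was submitted

        poll_url = f"{BASE_URL}/api/v1/repositories/{REPOSITORY}/queryjobs/{JOB_ID}"
        headers = {"Authorization": f"Bearer {API_TOKEN}"}

        while True:
            result = requests.get(poll_url, headers=headers).json()
            if result["done"]:
                break
            time.sleep(1)

        print(len(result.get("events", [])), "events in the final result")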

  • Other

    • LogScale deletes its humiotmp directories on graceful shutdown, but these directories could leak if LogScale crashed. LogScale now also deletes them on startup.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image is also upgraded to 3.7.0.

  • Other

    • Bundled JDK upgraded to 22.0.1.

New features and improvements

  • Installation and Deployment

    • Changing the NODE_ROLES of a host is now forbidden. A host will crash if the role it is configured with does not match what is listed in global for that host. To change the role of a host in a cluster, instead remove that host from the cluster by unregistering it, wipe its data directory, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier in the process.

    • Unused modules have been removed from the JDK bundled with LogScale releases, thus reducing the size of release artifacts.

  • UI Changes

    • Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.

    • Layout changes have been made in the Connections UI page.

      For more information, see Connections.

    • The maximum length for saved query names has been set to 200 characters.

    • A new Field list column type has been added in the Event List. It formats all fields in the event as key-value pairs, grouping the field list by prefix.

      For more information, see Column Properties.

    • The warnings for numbers out of the browser's safe number range have been slightly modified.

      For more information, see Troubleshooting: UI Warning: The actual value is different from what is displayed.

  • Automation and Alerts

    • Scheduled Reports can now be created. Scheduled Reports generate reports directly from dashboards and send them to the selected email addresses on a regular schedule.

      For more information, see Scheduled PDF Reports.

  • GraphQL API

    • A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.
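
      A hedged sketch of calling the mutation from Python is shown below; the exact input shape is an assumption, so verify it against your cluster's GraphQL schema. RdnsDefaultLimit is used as the example configuration.

        # Illustrative sketch: unset a dynamic configuration via the GraphQL API.
        # The argument shape of unsetDynamicConfig is an assumption; verify it
        # against your cluster's GraphQL schema before use.
        import requests

        BASE_URL = "https://logscale.example.com"
        API_TOKEN = "..."

        mutation = """
        mutation UnsetConfig($config: DynamicConfig!) {
          unsetDynamicConfig(input: {config: $config})
        }
        """

        resp = requests.post(
            f"{BASE_URL}/graphql",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"query": mutation, "variables": {"config": "RdnsDefaultLimit"}},
        )
        resp.raise_for_status()
        print(resp.json())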

    • Added a new GraphQL API generateParserFromTemplate() for decoding a parser YAML template without installing it.

  • API

    • Upgraded to the latest Jakarta Mail API to prevent a warning about a missing mail configuration file from being logged.

    • Information about files used in a query is now added to the query result returned by the API.

  • Configuration

    • The EXACT_MATCH_LIMIT configuration has been removed. It is no longer needed, since files are limited by size instead of rows.

    • When UNSAFE_RELAX_MULTI_CLUSTER_PROTOCOL_VERSION_CHECK is set to ensure Multi-Cluster Compatibility Across Versions, attempting to search in clusters older than version 1.131.2 is not allowed, and a message is now displayed in the UI.

    • A new experimental QueryBacktrackingLimit dynamic configuration is available through GraphQL. It limits how many times a query may iterate over an individual event, which can happen with excessive use of the copyEvent(), join() and split() functions, or regex() with repeat flags. The default for this limit is 3,000 and can be modified with the dynamic configuration. At present, the feature flag leaves this limit off by default.

  • Ingestion

    • Self-hosted only: derived tags (like #repo) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered by select() or drop(#repo) in the rule.

    • Audit logs related to Event Forwarders no longer include the properties of the event forwarder.

      Event forwarder disablement is now audit logged with type disable instead of enable.

    • Parser assertions can now be written to and loaded from YAML files, using the V3 parser format.

  • Dashboards and Widgets

    • A parameter panel widget type has been added, allowing users to drag parameters from the top panel into these panels. A parameter's width is now also adjustable in the settings.

      For more information, see Parameter Panel Widget.

  • Log Collector

    • Fleet Management now supports ephemeral hosts. If a collector is enrolled with the --ephemeralTimeout parameter, it will be unenrolled and disappear from the Fleet Overview interface after being offline for the specified number of hours. The feature requires LogScale Collector version 1.7.0 or above.

    • Live and Historic options for the Fleet Overview are introduced. In the Live view, the overview shows online collectors and is continuously updated with, for example, new CPU metrics or status changes. The Historic view displays all collector records for the last 30 days and is not updated with new information.

      For more information, see Switching between Live and Historic overview.

  • Functions

    • The onlyTrue parameter has been added to the bitfield:extractFlags() query function; it allows outputting only the flags whose value is true.

      For more information, see bitfield:extractFlags().
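
      Conceptually, onlyTrue switches the output from all flags to only the flags that are set. A small Python analogy (flag names and bit positions invented for illustration, not the function's implementation):

        # Python analogy of extracting named flags from a bitfield.
        FLAGS = {0: "READ", 1: "WRITE", 2: "EXECUTE"}  # bit index -> flag name

        def extract_flags(value: int, only_true: bool = False) -> dict:
            flags = {name: bool((value >> bit) & 1) for bit, name in FLAGS.items()}
            if only_true:
                # With onlyTrue, only flags whose value is true are emitted.
                flags = {name: v for name, v in flags.items() if v}
            return flags

        print(extract_flags(0b101))                  # READ/EXECUTE true, WRITE false
        print(extract_flags(0b101, only_true=True))  # only READ and EXECUTE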

    • Multi-valued arguments can now be passed to a saved query.

      For more information, see User Functions (Saved Searches).

    • array:filter() has been fixed: performing a filter test on an array field output by this function would sometimes lead to no results.

    • The query editor now warns about certain regex constructs that are valid but suboptimal, specifically quantified wildcards at the beginning or end of an (unanchored) regex.
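
      For example, the following Python snippet shows the kind of construct the editor now flags: on an unanchored regex, leading or trailing quantified wildcards such as .* do not change what matches but make matching more expensive.

        # Both searches match the same line; the first carries redundant
        # quantified wildcards at both ends of an unanchored regex.
        import re

        line = "2024-07-03 ERROR something failed"

        assert re.search(r".*ERROR.*", line)   # suboptimal
        assert re.search(r"ERROR", line)       # equivalent and cheaper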

  • Other

    • A new metric max_ingest_delay is introduced to keep track of the current maximum ingest delay across all Kafka partitions.

    • Two new metrics are introduced:

      • internal-throttled-poll-rate tracks the number of times that polling of workers during query execution was throttled due to rate limiting.

      • internal-throttled-poll-wait-time tracks the maximum delay per poll round due to rate limiting.

Fixed in this release

  • Storage

    • Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of minisegments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.

    • The Did not query segment error that spuriously appeared when the cluster performed digest reassignment has now been fixed.

    • The file synchronization job would stop if an upload to bucket storage failed. This issue has been fixed.

  • Dashboards and Widgets

    • Dragging a parameter to an empty Parameter Panel Widget would sometimes not move the parameter. This issue has been fixed.

    • Dashboard parameter queries now run as live queries only when the dashboard itself is live.

  • Functions

    • The time:xxx() functions did not correctly use the query's time zone as the default: the offset was applied in the opposite direction, so that, for example, GMT+2 was applied as GMT-2. This has now been fixed.

    • The query editor has been fixed: field auto-completions would sometimes not be suggested.

    • The query editor would mark the entire query as erroneous when count() was given the distinct=true parameter but no argument for the field parameter. This issue has been fixed.

  • Other

    • A regression introduced in version 1.132 has been fixed, where a file name starting with shared/ would be recognized as a shared file instead of a regular file. A shared file should only be referred to using exactly /shared/ as a prefix.

    • Fixed a very rare edge case that could cause the creation of malformed entities in global when a nested entity, such as a datasource, was deleted.

Improvement

  • UI Changes

    • When a saved query is used, the query editor now displays its query string on hover.

  • Storage

    • Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.

  • Packages

    • Package installation now validates that there are no duplicate names within each package template type (for example, you cannot use the same name for multiple parsers that are part of the same package).