Falcon LogScale 1.138.0 Preview (2024-05-14)

Version:           1.138.0
Type:              Preview
Release Date:      2024-05-14
Availability:      Cloud, On-Prem
End of Support:    2025-07-31
Security Updates:  Yes
Upgrades From:     1.112
JDK Compatibility: 17-21
Config. Changes:   No

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.
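
      A minimal sketch of the coming change, assuming the core count is configured in the environment file your deployment passes to the launcher (the file path and the previous placement of the JVM flag are illustrative):

        # Before 1.148.0: core count pinned through a JVM flag
        HUMIO_JVM_ARGS="-XX:ActiveProcessorCount=16"

        # From 1.148.0 on: set CORES instead; the launcher then configures
        # both LogScale and the JVM accordingly
        CORES=16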

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the parameter and its argument entirely (for example, change sort(..., type=any) to sort(...)) or supply the type argument that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value of the type parameter for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be interpreted as the provided type.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should instead use Kafka's own tools for editing partition assignments, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka installation.
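
    As an illustrative sketch, the following uses those tools directly; the topic name humio-ingest, the broker address, and the replica lists are assumptions, so substitute the values for your cluster:

      # Inspect the current partition layout of the ingest queue topic
      bin/kafka-topics.sh --bootstrap-server kafka-1:9092 \
        --describe --topic humio-ingest

      # reassignment.json - the desired assignment plan, for example:
      # {
      #   "version": 1,
      #   "partitions": [
      #     { "topic": "humio-ingest", "partition": 0, "replicas": [1, 2, 3] }
      #   ]
      # }

      # Apply the plan, then verify that it has completed
      bin/kafka-reassign-partitions.sh --bootstrap-server kafka-1:9092 \
        --reassignment-json-file reassignment.json --execute
      bin/kafka-reassign-partitions.sh --bootstrap-server kafka-1:9092 \
        --reassignment-json-file reassignment.json --verify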

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; a JDK will be bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely that a customer install will perform similarly to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed to ease migration from older deployments where the launcher script was not available. The launcher script removes the need to set these parameters manually, so the variable is no longer required; using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Override garbage collection configuration.
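
    A minimal sketch of the migration; the replacement variable names follow the launcher script documentation, but verify them against your LogScale version:

      # Before: JVM flags lumped into the deprecated variable
      # HUMIO_JVM_ARGS="-Xmx32g -XX:+UseParallelGC -Xlog:gc*:file=/var/log/humio/gc.log"

      # After: split the flags across the launcher's dedicated variables
      HUMIO_MEMORY_OPTS="-Xmx32g"
      HUMIO_GC_OPTS="-XX:+UseParallelGC"
      HUMIO_JVM_LOG_OPTS="-Xlog:gc*:file=/var/log/humio/gc.log"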

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: Strimzi

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2(), which allows more extensive test cases to be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().
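
    As an illustrative sketch of the new API, the following calls createParserV2() through the GraphQL HTTP endpoint with curl. The input type name (CreateParserInputV2), the test case field names, and all values are assumptions based on the descriptions above; consult your cluster's GraphQL schema for the authoritative shape:

      curl -s https://logscale.example.com/graphql \
        -H "Authorization: Bearer $API_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{
          "query": "mutation CreateParser($input: CreateParserInputV2!) { createParserV2(input: $input) { id name } }",
          "variables": {
            "input": {
              "repositoryName": "my-repo",
              "name": "json-parser",
              "script": "parseJson()",
              "fieldsToTag": [],
              "fieldsToBeRemovedBeforeParsing": [],
              "allowOverwritingExistingParser": false,
              "testCases": [
                { "event": { "rawString": "{\"message\": \"hello\"}" }, "outputAssertions": [] }
              ]
            }
          }
        }'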

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The Kafka client has been upgraded to 3.7.0. The Kafka server version in the deprecated humio/kafka Docker image is also upgraded to 3.7.0.

New features and improvements

  • Installation and Deployment

    • Changing the NODE_ROLES of a host is now forbidden. A host will crash if its configured role does not match what is listed in global for that host. To change the role of a host in a cluster, instead remove the host from the cluster by unregistering it, wipe the host's data directory, and boot the node back into the cluster as if it were a completely new node. The node will be assigned a new vhost identifier in the process.
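
      A minimal sketch of that procedure, assuming a systemd-managed install, a data directory of /data/humio-data, and unregistration via the clusterUnregisterNode GraphQL mutation; the mutation name, its arguments, and the vhost number are assumptions to verify against your version:

        # 1. Unregister the host from the cluster (vhost 5 is illustrative)
        curl -s https://logscale.example.com/graphql \
          -H "Authorization: Bearer $API_TOKEN" \
          -H "Content-Type: application/json" \
          -d '{"query":"mutation { clusterUnregisterNode(nodeID: 5, force: false) { cluster { nodes { id } } } }"}'

        # 2. Wipe the host's data directory while LogScale is stopped
        systemctl stop humio-server
        rm -rf /data/humio-data/*

        # 3. Boot the node back into the cluster as a brand-new node;
        #    it will be assigned a new vhost identifier
        systemctl start humio-server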

    • Unused modules have been removed from the JDK bundled with LogScale releases, thus reducing the size of release artifacts.

  • UI Changes

    • Layout changes have been made to the Connections page in the UI.

      For more information, see Connections.

  • GraphQL API

    • A new unsetDynamicConfig GraphQL mutation is introduced to unset dynamic configurations.
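
      A minimal sketch of calling the new mutation with curl; the input shape is assumed to mirror setDynamicConfig, and QueryMemoryLimit is only an illustrative configuration name:

        curl -s https://logscale.example.com/graphql \
          -H "Authorization: Bearer $API_TOKEN" \
          -H "Content-Type: application/json" \
          -d '{"query":"mutation { unsetDynamicConfig(input: { config: QueryMemoryLimit }) }"}'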

  • Ingestion

    • Self-hosted only: derived tags (like #repo) are now included when executing Event Forwarding Rules. These fields will be included in the forwarded events unless filtered by select() or drop(#repo) in the rule.

  • Functions

    • array:filter() has been fixed: performing a filter test on an array field output by this function would sometimes lead to no results.

  • Other

    • A new metric max_ingest_delay is introduced to keep track of the current maximum ingest delay across all Kafka partitions.

Fixed in this release

  • Storage

    • Taking nodes offline in a cluster that does not use bucket storage could prevent cleanup of minisegments associated with merge targets owned by the offline nodes, causing global to grow. To solve this, the cluster now moves merge targets that have not yet achieved full replication to follow digest nodes.

    • The file synchronization job would stop if an upload to bucket storage failed. This issue has been fixed.

  • Dashboards and Widgets

    • The execution of dashboard parameter queries has been changed to only run as live when the dashboard itself is live.

  • Other

    • Fixed a very rare edge case that could cause creation of malformed entities in global when a nested entity, such as a datasource, was deleted.

Improvement

  • Storage

    • Logging improvements have been made around bucket uploads to assist with troubleshooting slow uploads, which are only seen in clusters with very large data sets.