Falcon LogScale 1.70.2 LTS (2023-03-06)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes?
1.70.2  | LTS  | 2023-03-06   | Cloud        | 2024-01-31     | No               | 1.44.0        | No


Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.70.2/server-1.70.2.tar.gz

These notes include entries from the following previous releases: 1.70.0, 1.70.1

Security fix and bug fixes.

Deprecation

Items that have been deprecated and may be removed in a future release.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • We have enabled a new vhost selection method by default. The way hosts select their vhost number when joining the cluster has changed; the new logic is described on the Node Identifiers documentation page.

      The new logic does not depend on ZooKeeper, even for clusters where nodes occasionally lose disk contents, such as Kubernetes deployments. To smooth migration for clusters using ZooKeeper, the new logic will still interact with ZooKeeper to prevent nodes running a mix of new and old vhost code from fighting over vhost numbers. This interaction is only necessary while migrating.

      The recommended steps for migrating off of ZooKeeper are as follows:

      1. Deploy the new LogScale version to all nodes.

      2. Remove ZOOKEEPER_URL_FOR_NODE_UUID, ZOOKEEPER_URL, ZOOKEEPER_PREFIX_FOR_NODE_UUID, ZOOKEEPER_SESSIONTIMEOUT_FOR_NODE_UUID from the configuration for all nodes.

      3. Reboot all nodes.

      Once rebooted, LogScale will no longer need ZooKeeper directly, except as an indirect dependency of Kafka. Due to this, the 4 ZooKeeper-related variables are deprecated as of this release and will be removed in a future version.

      Since vhost numbers now change when a disk is wiped, administrators of clusters where USING_EPHEMERAL_DISKS is set to true will need to ensure that the storage and digest partitioning tables are kept up to date as hosts join and leave the cluster. Updating the tables is handled automatically by the LogScale Kubernetes operator; for clusters that do not use the operator, cluster administrators should run scripts periodically to keep the storage and digest tables up to date. This is not a new requirement for ephemeral clusters, but we are providing a reminder here since it may be needed more frequently now.

      The cluster GraphQL query can provide updated tables (the suggestedIngestPartitions and suggestedStoragePartitions fields), which can then be applied via the updateIngestPartitionScheme and updateStoragePartitionScheme GraphQL mutations; a scripting sketch of this workflow is included at the end of this item.

      Should you experience any issues using this feature, you may opt out by setting NEW_VHOST_SELECTION_ENABLED=false. If you do this, please reach out to support with feedback, as we otherwise intend to remove the old vhost selection logic in the coming months.

      Note

      When using Kubernetes deployments managed by the operator, you must upgrade to version 0.17.0 of the operator to support migration away from the ZooKeeper requirement. See Operator Version 0.17.0.
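
      The following is a minimal scripting sketch, for clusters that do not use the Kubernetes operator, of fetching the suggested partition tables so they can be reviewed and then applied with the mutations named above. It assumes Python with the requests library; the LOGSCALE_URL and LOGSCALE_TOKEN environment variable names and the id/nodeIds fields selected on the suggested partitions are illustrative assumptions, so verify them against the GraphQL schema of your LogScale version.

        # Sketch: periodically fetch the suggested digest/storage partition tables.
        # Assumptions (not from these release notes): the LOGSCALE_URL / LOGSCALE_TOKEN
        # environment variables and the id/nodeIds fields on the suggestions.
        import json
        import os

        import requests

        BASE_URL = os.environ["LOGSCALE_URL"]   # e.g. "https://logscale.example.com"
        TOKEN = os.environ["LOGSCALE_TOKEN"]    # API token with cluster management rights

        QUERY = """
        query {
          cluster {
            suggestedIngestPartitions { id nodeIds }
            suggestedStoragePartitions { id nodeIds }
          }
        }
        """

        # POST the cluster query to the GraphQL endpoint.
        response = requests.post(
            f"{BASE_URL}/graphql",
            json={"query": QUERY},
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()
        body = response.json()
        if body.get("errors"):
            raise SystemExit(f"GraphQL errors: {body['errors']}")

        # Print the suggested tables for review.
        cluster = body["data"]["cluster"]
        print(json.dumps(cluster["suggestedIngestPartitions"], indent=2))
        print(json.dumps(cluster["suggestedStoragePartitions"], indent=2))

        # The printed tables can then be applied with the updateIngestPartitionScheme
        # and updateStoragePartitionScheme mutations; take their exact input types
        # from the GraphQL schema of your LogScale version.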

  • Other

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade was performed for security reasons to resolve CVE-2022-36944, although Kafka should not be affected by this issue. If you wish to do a rolling upgrade of your Kafka containers, always refer to the Kafka upgrade guide.

New features and improvements

  • Dashboards and Widgets

    • Added support for export and import of dashboards with query-based widgets that use a fixed time window.

  • Other

    • Added code to ensure all mini-segments for the same merge target end up located on the same hosts. A change in 1.63 could create a situation where mini-segments for the same merge target wound up on different nodes, which the query code currently assumes cannot happen. This could cause "Result is partial" responses to user queries.

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).

    • New background task TagGroupingSuggestionsJob reports on the flow rate in repositories with many datasources, identifying the ones it considers slow based on the configuration of segment sizes and flush intervals. The output in the log can inform the decision of whether to add Tag Grouping to a repository to reduce the number of slow datasources.

Fixed in this release

  • Security

    • Update Netty to address CVE-2022-41915.

  • Automation and Alerts

    • Fixed a bug where a link in the notification for a failed alert would link to a non-existing page.

  • GraphQL API

    • Fixed an issue where pending deletes would cause nodes to fail to start, reporting a NullPointerException.


  • Dashboards and Widgets

    • Fixed three bugs in the Bar Chart: the sorting would be wrong when query results updated in the stacked version, flickering would occur when deselecting all series in the legend, and deselecting renamed series in the legend would not have any effect.

    • Scatter Chart has been updated to fix the following issues:

      • The x-axis would not update correctly with updated query results.

      • The trend line toggle in the style panel was invisible.

    • Fixed an issue with parameters in dashboards, where the values of a fixed list parameter would not have their order maintained when exporting and importing templates.

  • Other

    • Fixed a bug where very long string literals in a regex could cause a query/parser to fail with a stack overflow.

    • Fixed unlimited waits for nodes to get in sync, which caused digest coordination to fail. The time allowed for a node to get "in sync" on a partition before leadership is assigned to it is now limited in cases where the previous digest leader shut down gracefully.

    • Fixed a timeout when publishing to the global topic in Kafka, which resulted in input segments for merge temporarily being marked as broken.

Known Issues