Humio Server 1.14.5 LTS (2020-10-21)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.14.5  | LTS  | 2020-10-21   | Cloud        | 2021-08-31     | No               | 1.12.0        | No


These notes include entries from the following previous releases: 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4

Bug Fixes and New Metrics

Fixed in this release

  • Summary

    • Fixed a problem where too many segments could be generated when restarting nodes.

    • Fixed an issue where Humio could behave incompatibly with Kafka versions prior to 2.3.0 if KAFKA_MANAGED_BY_HUMIO was true.

    • Fixed a missing cache update when deleting a view.

    • Changed limits for what can be fetched via HTTP from inside Humio.

    • Changed the query scheduling to account for the work of the overall query, rather than per job started. This allows fairer scheduling of queries hitting many dataspaces, e.g. when using search-all.

    • Improved thread naming to produce more usable thread dumps.

    • Fixed a race condition when cleaning up datasources.

    • The Humio cluster version is now logged in the non-sensitive log.

    • The job for updating the IP location database now uses the configured HTTP proxy, if present.

    • Added logging to detect issues when truncating finished files.

    • New metrics for scheduling of queries (see the sketch after this list):

      • local-query-jobs-wait: Histogram of the time in milliseconds that each query waited between getting any work done, including exports

      • local-query-jobs-queue: Count of queries currently queued or active on the node, including exports

      • local-query-segments-queue-exports-part: Number of segments currently queued for query for exports

      • local-query-jobs-queue-exports-part: Count of queries currently queued or active on the node for exports

    • Improved performance when processing streaming queries.

    • Added log rotation for humio-non-sensitive logs.

    • Changed priorities when fetching segments to a node that has been offline for a longer period. This avoids waiting too long before the cluster becomes fully synced.

    • The user email is now included in metrics when queries end.

    • Fixed a problem where some deleted segments could show up as missing.

    • Fixed an issue where Humio might attempt to write a larger message to Kafka than what Kafka allows.

    • Removed the restriction on expiry time when creating an emergency user through the emergency user API. See Enabling Emergency Access.

    • Removed the restriction on the length of group names from LDAP.

    • Fixed an issue where a slow data stream could cause Humio to retain more data in Kafka than necessary, as well as cause a restarted Humio node to reprocess too much data.

    • Fixed a problem where duplicate uploaded files would not be deleted from /tmp.

    • Improved handling of data replication when nodes are offline.

    • Humio now collects global database operations in bulk to avoid overloading Kafka with updates.

    • Improved handling of sub-queries polling state from the main query when using join().

    • Added new metric jvm-hiccup for measuring stalls/pauses in the JVM.

    • Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.

    • Fixed several cases where Humio might attempt to write a larger message to Kafka than Kafka allows.

    • The HEC endpoint is now strictly validated as documented for top-level fields, which means invalid input will be rejected (see the example after this list). See Ingesting with HTTP Event Collector (HEC).

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: another node from the set of digest nodes now takes over if a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
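
The four query-scheduling metrics above are easier to reason about with a concrete consumer. The following is a minimal sketch, not part of Humio itself: it assumes a hypothetical JSON-lines metrics log (the file name humio-metrics.log and the name/value record layout are illustrative assumptions, not Humio's documented metrics format) and simply filters out samples for the new metric names.

    # Minimal sketch (Python): filter a hypothetical JSON-lines metrics log
    # for the new query-scheduling metrics. The file name and record layout
    # are assumptions for illustration, not Humio's documented format.
    import json

    SCHEDULING_METRICS = {
        "local-query-jobs-wait",
        "local-query-jobs-queue",
        "local-query-segments-queue-exports-part",
        "local-query-jobs-queue-exports-part",
    }

    def scheduling_samples(path="humio-metrics.log"):  # hypothetical path
        with open(path) as fh:
            for line in fh:
                try:
                    record = json.loads(line)
                except ValueError:
                    continue  # skip lines that are not JSON
                if record.get("name") in SCHEDULING_METRICS:
                    yield record

    if __name__ == "__main__":
        for sample in scheduling_samples():
            print(sample.get("name"), sample.get("value"))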
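
Because the HEC endpoint now rejects requests whose top-level fields do not validate, a payload that sticks to the documented top-level keys is the safest starting point. The following is a minimal sketch under common HEC conventions (an event object plus optional time, host, source, sourcetype and fields, sent with a Bearer ingest token); the host name, token and endpoint path are placeholders, so consult Ingesting with HTTP Event Collector (HEC) for the exact URL and field contract.

    # Minimal sketch (Python): send one event to a Humio HEC endpoint using
    # only documented top-level fields. The URL, token and field values are
    # placeholders for illustration; see the HEC documentation for specifics.
    import json
    import urllib.request

    HEC_URL = "https://humio.example.com/api/v1/ingest/hec"  # placeholder host/path
    INGEST_TOKEN = "YOUR-INGEST-TOKEN"                        # placeholder token

    payload = {
        "time": 1603238400,                         # event time, seconds since epoch
        "host": "webserver-1",
        "source": "access.log",
        "sourcetype": "accesslog",
        "event": {"method": "GET", "status": 200},  # the event body itself
        "fields": {"datacenter": "dc1"},            # extra key/value pairs
    }

    request = urllib.request.Request(
        HEC_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {INGEST_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        print(response.status, response.read().decode("utf-8"))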