Humio Server 1.14.2 LTS (2020-09-17)

Version: 1.14.2
Type: LTS
Release Date: 2020-09-17
Availability: Cloud
End of Support: 2021-08-31
Security Updates: No
Upgrades From: 1.12.0
Config. Changes: No

These notes include entries from the following previous releases: 1.14.0, 1.14.1

Bug Fixes, HEC Endpoint Validation and New Metrics

Fixed in this release

  • Summary

    • Fixed a problem where too many segments could be generated when restarting nodes.

    • Fixed a race condition when cleaning up datasources.

    • The job for updating the IP location database now uses the configured HTTP proxy, if present.

    • New metrics for scheduling of queries (a monitoring sketch follows this list):

      • local-query-jobs-wait: Histogram of the time in milliseconds that each query waited between getting any work done, including exports

      • local-query-jobs-queue: Count of queries currently queued or active on the node, including exports

      • local-query-segments-queue-exports-part: Count of elements in the queue, measured as the number of segments currently queued for querying, for exports

      • local-query-jobs-queue-exports-part: Count of queries currently queued or active on the node, for exports

    • Improved performance when processing streaming queries.

    • Added log rotation for humio-non-sensitive logs.

    • Included the user email in metrics when queries end.

    • Removed the restriction on expire time when creating an emergency user through the emergency user API. See Enabling Emergency Access.

    • Removed the restriction on the length of group names from LDAP.

    • Improved handling of data replication when nodes are offline.

    • Fixed a problem where segments could be downloaded to stateless frontend nodes from Bucket storage.

    • The HEC endpoint is now strictly validated for the documented top-level fields, which means invalid input will be rejected; example payloads follow this list. See Ingesting with HTTP Event Collector (HEC).

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. Previously, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition then did a failover, Humio would go far back on the Kafka ingest queue to reread data. This has been changed: another node from the set of digest nodes now takes over when a node goes offline, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts. A toy model of this takeover behavior follows this list.
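
The following is a minimal monitoring sketch for the new query-scheduling metrics listed above. The metrics URL, the "name value" response format, and the alert threshold are assumptions for illustration only, not part of this release; adapt the fetch step to however your cluster actually exposes Humio's metrics (for example via a metrics log shipper or exporter).

    # Hypothetical monitoring sketch; the metrics URL and line format below
    # are assumptions, not part of the release notes.
    import requests

    HUMIO_URL = "https://humio.example.com"   # placeholder base URL
    METRICS_PATH = "/api/v1/metrics"          # hypothetical metrics endpoint

    QUEUE_METRIC = "local-query-jobs-queue"
    ALERT_THRESHOLD = 50                      # arbitrary example threshold

    resp = requests.get(f"{HUMIO_URL}{METRICS_PATH}", timeout=10)
    resp.raise_for_status()

    # Assume each line is "<metric-name> <value>" for the sake of the sketch.
    for line in resp.text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == QUEUE_METRIC:
            queued = float(parts[1])
            if queued > ALERT_THRESHOLD:
                print(f"warning: {QUEUE_METRIC} is {queued:.0f}; queries may be backing up")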
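
Below is a hedged example of the stricter HEC validation. The host name and ingest token are placeholders, and the exact set of accepted top-level fields and the rejection response are as described in Ingesting with HTTP Event Collector (HEC); the point is only that a payload with an undocumented top-level key is now rejected rather than accepted.

    # Sketch of posting to the HEC ingest endpoint; host and token are placeholders.
    import requests

    HUMIO_URL = "https://humio.example.com"   # placeholder
    INGEST_TOKEN = "your-ingest-token"        # placeholder

    headers = {"Authorization": f"Bearer {INGEST_TOKEN}"}

    # Valid payload: only documented top-level fields such as event, time and fields.
    valid = {"event": "user logged in", "time": 1600300800, "fields": {"app": "web"}}

    # Invalid payload: an undocumented top-level key, which the stricter
    # validation now rejects instead of silently ignoring.
    invalid = {"event": "user logged in", "unexpected_key": "oops"}

    for payload in (valid, invalid):
        r = requests.post(f"{HUMIO_URL}/api/v1/ingest/hec", json=payload, headers=headers)
        print(r.status_code, r.text[:200])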
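
To make the digest takeover behavior above concrete, here is a small toy model of picking the hosts that effectively handle a partition. It is an illustration of the described behavior only, not Humio's actual assignment logic, and all node names are made up.

    # Toy model (not Humio code) of digest takeover: when an assigned node is
    # offline, another online digest node steps in so the replication factor holds.
    from typing import List, Set

    def effective_hosts(assigned: List[str], online: Set[str],
                        digest_pool: List[str], replication_factor: int) -> List[str]:
        """Assigned hosts that are online, topped up from other online digest nodes."""
        hosts = [h for h in assigned if h in online]
        for candidate in digest_pool:
            if len(hosts) >= replication_factor:
                break
            if candidate in online and candidate not in hosts:
                hosts.append(candidate)
        return hosts

    # Partition assigned to node-a and node-b; node-b is offline, so node-c
    # (an online digest node) takes over to keep two copies of the data.
    print(effective_hosts(["node-a", "node-b"], {"node-a", "node-c"},
                          ["node-a", "node-b", "node-c"], replication_factor=2))
    # -> ['node-a', 'node-c']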