Humio Server 1.14.0 LTS (2020-08-26)

Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
1.14.0  | LTS  | 2020-08-26   | Cloud        | 2021-08-31     | No               | 1.12.0        | No

Bug fixes and updates.

Free Text Search, Load Balancing of Queries, and TLS Support. This release promotes the latest 1.13 release from preview to stable. For more details, see the individual 1.13.x release notes (links in the changelog).

Free text search now searches all fields rather than only the @rawstring field.
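
As a hypothetical illustration (the field name and search term are made up), a bare word in a query now matches values in any field, while the raw event text can still be targeted explicitly:

    // Free-text search: matches "timeout" wherever it appears, in any field of the event
    timeout

    // Searching only the raw event text explicitly
    @rawstring = /timeout/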

Humio can now balance and reuse existing queries internally in the cluster, so load balancer configuration is no longer needed to achieve this. See Configuration Settings and Installing Using Containers.

TLS support encrypts communication to/from ZooKeeper, Kafka, and other Humio nodes.

IP Location Database Management Changed

The database used as the data source for the ipLocation() query function must be updated within 30 days of MaxMind publishing a new version. To comply with this, the database is no longer shipped as part of the Humio artifacts but will either:

  • Be fetched automatically by Humio, provided that Humio is allowed to connect to the database update service hosted by Humio. This is the default behaviour.

  • Have to be updated manually (see the ipLocation() reference page).

If the database cannot be automatically updated and no database is provided manually, the ipLocation() query function will no longer work.
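
For context, a minimal query sketch showing what relies on this database (the client_ip field is a hypothetical example; the exact output field names are described on the ipLocation() reference page):

    // Enrich events with location fields (country, city, etc.) derived from the
    // hypothetical client_ip field; requires a working IP location database
    ipLocation(field=client_ip)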

Controlling which nodes to use as query coordinators. With query load balancing now handled inside Humio, customers that previously relied on an external load balancer to control which nodes are query coordinators now need to set QUERY_COORDINATOR to false on nodes they do not want to become query coordinators. See Installing Using Containers and Configuration Settings.
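
As a sketch, assuming node configuration is supplied via environment variables as described under Configuration Settings, excluding a node from query coordination could look like this:

    # Prevent this node from acting as a query coordinator
    QUERY_COORDINATOR=false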

Fixed in this release

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. For example, in a Humio cluster with a replication factor of 2, one node going offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the single node left handling a digest partition then did a failover, Humio would have to go far back on the Kafka ingest queue to reread the data. This has been changed: another node from the set of digest nodes now takes over when a node goes offline, keeping the replication factor as desired. As a result, hosts other than those specified for a given digest partition on the cluster management page can be handling the digest data for that partition. Only digest nodes will be selected as hosts.