Humio Server 1.14.0 Stable (2020-08-26)

Version: 1.14.0
Type: Stable
Release Date: 2020-08-26
Availability: Cloud, On-Prem
End of Support: 2021-08-31
Security Updates: No
Upgrades From: 1.12.0
JDK Compatibility: 11
Req. Data Migration: No
Config. Changes: No

JAR Checksums:

MD5:    d0ede2c5d1075119507701bff7a04b29
SHA1:   b4fc3f50fabe0abdea5db2a2b502c0b2b0b71aa7
SHA256: e9ddafa574576eb890cf22d241e0307caf613cc5b1bd9fdc84e50e975a40d67b
SHA512: 16506530541f87579660b630265171c137b8de787b5c5d11b145fc1d18ff04038514b91469b040212dbd27bddc2cee4cb5cca0054f547917624137fedb23ba20

Bug fixes and updates.

Free Text Search, Load Balancing of Queries and TLS Support. This release promotes the latest 1.13 release from preview to stable. To see more details, go through the individual 1.13.x release notes (links in the changelog).

Free text search now searches all fields rather than only the @rawstring field.
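
As a minimal illustration (the event shape is hypothetical; the syntax is ordinary Humio free-text search), a bare term in a query now matches values in any field:

  // Matches events where any field contains "timeout",
  // not only events with "timeout" in @rawstring.
  timeout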

Humio can now balance and reuse existing queries internally in the cluster. Load balancer configuration to achieve this is no longer needed. See Configuration Settings and Manual Cluster Deployment.

TLS support encrypts communication to/from Zookeeper, Kafka, and other Humio nodes.
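
As a hedged sketch of the Kafka leg only (this assumes the EXTRA_KAFKA_CONFIGS_FILE option is available in your version; the properties shown are standard Kafka client settings, and the paths and password are placeholders):

  # Humio environment: hand extra properties to Humio's Kafka clients
  EXTRA_KAFKA_CONFIGS_FILE=/etc/humio/kafka-tls.properties

  # /etc/humio/kafka-tls.properties (standard Kafka client settings)
  security.protocol=SSL
  ssl.truststore.location=/etc/humio/kafka-truststore.jks
  ssl.truststore.password=changeit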

ipLocation() Database Management Changed

The database used as the data source for the ipLocation() query function must be updated within 30 days of MaxMind publishing a new version. To comply with this, the database is no longer shipped as part of the Humio artifacts and will instead either:

  • Be fetched automatically by Humio, provided that Humio is allowed to connect to the database update service hosted by Humio. This is the default behaviour.

  • Have to be updated manually (see the ipLocation() reference page).

If the database cannot be automatically updated and no database is provided manually, the ipLocation() query function will no longer work.
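
As a usage sketch (assuming events carry an IP address in a hypothetical client_ip field; see the ipLocation() reference page for the exact parameters and output fields):

  // Enrich events with geolocation for the address in client_ip;
  // the function adds fields such as client_ip.country and client_ip.city.
  ipLocation(field=client_ip)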

Controlling which nodes to use as query coordinators. With the load balancing now built into Humio, customers that previously relied on an external load balancer to control which nodes are query coordinators need to set QUERY_COORDINATOR to false on the nodes they do not want to become query coordinators. See Manual Cluster Deployment and Configuration Settings.
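
For example, in the environment of a node that should not coordinate queries (the variable name and value are as given above):

  # Prevent this node from being selected as a query coordinator
  QUERY_COORDINATOR=false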

Bug Fixes

  • Configuration

    • Improved handling of query restarts to avoid unnecessary restarts in some scenarios.

    • Handling of digest when a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the one node still handling such a digest partition then did a failover, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: when a node goes offline, another node from the set of digest nodes now takes over, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.