Humio Server 1.14.0 Stable (2020-08-26)
Bug fixes and updates.
Free Text Search, Load Balancing of Queries and TLS Support. This release promotes the latest 1.13 release from preview to stable. To see more details, go through the individual 1.13.x release notes (links in the changelog).
Free text search now searches all fields of an event rather than only the @rawstring field.
TLS support encrypts communication to/from ZooKeeper, Kafka, and other Humio nodes.
IP Location Database Management Changed
The database used as the data source for the
ipLocation() query function must be updated
within 30 days of MaxMind publishing a new version.
To comply with this, the database is no longer
shipped as part of the Humio artifacts but will either:
Be fetched automatically by Humio, provided that Humio is allowed to connect to the database update service hosted by Humio. This is the default behaviour.
Have to be updated manually (see the ipLocation() documentation).
If the database cannot be updated automatically and no database
is provided manually, the ipLocation()
query function will no longer work.
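As a quick illustration of what is affected, a typical query using the function looks like the sketch below. The field name client_ip is an assumption for this example; ipLocation() enriches events with subfields such as client_ip.country derived from the MaxMind database:

```
// Enrich events with geolocation for the address in client_ip.
// Without a current MaxMind database, this function stops working.
ipLocation(field=client_ip)
| groupBy(field=client_ip.country)
```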
Controlling which nodes to use as query coordinators: due to the
built-in load balancing of queries in Humio, customers that previously relied on
external load balancing to control which nodes act as query coordinators now
need to set
QUERY_COORDINATOR to false on nodes
they do not want to become query coordinators. See
Manual Cluster Deployment.
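For example, in a node's environment-style Humio configuration file (the file location varies by deployment), the setting could look like:

```
# Prevent this node from acting as a query coordinator.
QUERY_COORDINATOR=false
```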
Handling of digest in the case where a node has been offline for a long time has been improved. As an example, running a Humio cluster with a replication factor of 2 and having one node go offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka). That data would not be regarded as properly replicated until the second node returned, and if the one node still handling a digest partition failed over, Humio would end up going far back on the Kafka ingest queue to reread data. This has been changed: now another node from the set of digest nodes takes over when a node goes offline, keeping the replication factor at the desired level. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition. Only digest nodes will be selected as hosts.
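The takeover described above can be sketched as follows. This is an illustrative model only, not Humio's actual implementation; the function name, REPLICATION_FACTOR constant, and host labels are all hypothetical:

```python
# Illustrative sketch: when a digest node goes offline, each partition
# that loses a replica is topped up with another live digest node, so
# the desired replication factor is maintained.

REPLICATION_FACTOR = 2  # desired number of digest replicas per partition

def reassign_partitions(partitions, digest_nodes, offline_node):
    """Return a new partition->hosts map with offline_node replaced.

    partitions:   dict mapping partition id -> list of host names
    digest_nodes: all nodes eligible to handle digest
    offline_node: the node that just went offline
    """
    live = [n for n in digest_nodes if n != offline_node]
    reassigned = {}
    for pid, hosts in partitions.items():
        hosts = [h for h in hosts if h != offline_node]
        # Top up with live digest nodes not already serving this partition.
        for candidate in live:
            if len(hosts) >= REPLICATION_FACTOR:
                break
            if candidate not in hosts:
                hosts.append(candidate)
        reassigned[pid] = hosts
    return reassigned
```

Note that only members of the digest node set are ever chosen as replacements, mirroring the behaviour described above.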
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.