Humio Server 1.14.1 LTS (2020-09-08)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.14.1 | LTS | 2020-09-08 | Cloud | 2021-08-31 | No | 1.12.0 | No |
File hashes
JAR Checksum | Value |
---|---|
MD5 | b57e75be1b07018a158585f04cdcb9d8 |
SHA1 | 6e1754ba60abeb35233a728dcb78ae11f0986d8a |
SHA256 | 5939bb412601b4356ccc431d87e3e8290a48db967a0739f638b0ea587e1a9eb7 |
SHA512 | a90b03b4081cd8ee73d06c0f925705740c769ff405b09cb1c06a51b8566775d7e2540bd6275ae1f8f7e4e0a65d241d7f1f195d26c6c3474894639d6b19b7d3d3 |
These notes include entries from the following previous releases: 1.14.0
Bug fixes and updates.
Fixed in this release
Summary
Improve performance when processing streaming queries.
Remove the restriction on expiry time when creating an emergency user through the emergency user API. See Enabling Emergency Access.
Remove the restriction on the length of group names from LDAP.
Configuration
Improved handling of query restarts to avoid unnecessary restarts in some scenarios.
Improved handling of digest when a node has been offline for a long time. For example, in a Humio cluster with a replication factor of 2, having one node offline for a long time would leave some ingested data residing on only one Humio node (and on the ingest queue in Kafka), and that data would not be regarded as properly replicated until the second node returned. If the only node left handling a digest partition then did a failover, Humio would go far back on the Kafka ingest queue to reread data. This has been changed: when a node goes offline, another node from the set of digest nodes now takes over, keeping the replication factor as desired. This means that hosts other than those specified for a given digest partition on the cluster management page can actually be handling the digest data for that partition; only digest nodes will be selected as hosts. The sketch below illustrates the takeover rule.
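A minimal sketch of that takeover rule, assuming a simplified model: `DigestPartition`, `takeOver`, the node names, and the deterministic replacement choice are all hypothetical illustrations, not Humio's actual internals.

```scala
// Hypothetical sketch of the digest takeover rule described above.
// Names and types are illustrative assumptions, not Humio's internals.

case class DigestPartition(id: Int, hosts: Vector[String])

object DigestReassignment {
  /** Replace offline hosts on a partition with live digest nodes so the
    * partition keeps `replicationFactor` live replicas. Only nodes that
    * are digest nodes are eligible as replacements.
    */
  def takeOver(
      partition: DigestPartition,
      liveDigestNodes: Set[String],
      replicationFactor: Int
  ): DigestPartition = {
    // Keep the currently assigned hosts that are still live digest nodes.
    val live = partition.hosts.filter(liveDigestNodes.contains)
    // How many replicas are missing relative to the desired factor.
    val needed = replicationFactor - live.size
    // Pick replacements from the remaining live digest nodes
    // (sorted only to make this sketch deterministic).
    val replacements = liveDigestNodes
      .diff(live.toSet)
      .toVector
      .sorted
      .take(math.max(0, needed))
    partition.copy(hosts = live ++ replacements)
  }
}

object Demo extends App {
  val p = DigestPartition(7, Vector("node-a", "node-b"))
  // node-b goes offline; node-c and node-d are live digest nodes.
  val after = DigestReassignment.takeOver(
    p,
    liveDigestNodes = Set("node-a", "node-c", "node-d"),
    replicationFactor = 2
  )
  println(after) // DigestPartition(7,Vector(node-a, node-c))
}
```

In this model the partition ends up hosted on a node that was never in its original assignment, which matches the note above: the hosts shown for a partition on the cluster management page are no longer necessarily the ones handling its digest data.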