Humio Server 1.4.0 Archive (2019-02-14)
Version | Type | Release Date | End of Support | Upgrades From | Data Migration | Config. Changes |
---|---|---|---|---|---|---|
1.4.0 | Archive | 2019-02-14 | 2019-11-19 | 1.3.2 | No | No |
JAR Checksum | Value |
---|---|
MD5 | fb0290d5203f178cfbbef8df7b89106a |
SHA1 | 7e2f17d867734264c91c697849884bb530fbc450 |
SHA256 | cce65b639ab277dd50cf29f2d53ff119d705c64d33a4a118b8e49b899d8dd27c |
SHA512 | 7cfad54614ef63f35fe6d3cc50d2239650845b390e0aea63f1e2e199745a68854524fdb24cad5f59867339cbe0eb2432bd904cef7e7a90e9b115d5abe6f1a52a |
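The checksums above can be used to verify a downloaded JAR before deploying it. A minimal sketch of the verification workflow, using a placeholder file and a computed hash; for the real release, substitute the downloaded JAR's filename and the SHA256 value from the table:

```shell
# Sketch of checksum verification with a placeholder file (not the real JAR).
# Replace the file and expected hash with the actual download and table value.
printf 'example' > humio-server.jar.example   # stand-in for the downloaded JAR
expected=$(sha256sum humio-server.jar.example | awk '{print $1}')
echo "${expected}  humio-server.jar.example" | sha256sum -c -
```

`sha256sum -c -` exits non-zero on a mismatch, which makes it easy to gate an automated deployment step on the check.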
High availability for ingest and digest.
Bug Fixes
Summary
- Segments are flushed after 30 minutes. This means S3 archiving is likely to lag the incoming stream by less than 40 minutes.
- Digest partitions can now be assigned to more than one host. Doing so enables the cluster to continue digesting incoming events if a single host is lost from the cluster.
- If rolling back, make sure to roll back to version 1.3.2+.
- Cloning an existing dashboard when creating one from the front page was broken.
- The emphasis is on efficiency during normal operation rather than in the failure cases: after a failure the cluster will need some time to recover, during which ingested events will be delayed. The cluster needs ample CPU to catch up after such a fail-over. There are both new and reinterpreted configuration options in the environment config for controlling how segments get built for this.
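The failover behavior described above can be sketched as follows. This is an illustrative model only, not Humio's actual API or implementation: with each digest partition assigned to more than one host, losing any single host still leaves a live owner for every partition, so digest of incoming events continues.

```python
# Illustrative sketch (not Humio's actual API): a digest-partition table
# where every partition is assigned to two hosts. Host names are hypothetical.
partitions = {
    0: ["host-a", "host-b"],
    1: ["host-b", "host-c"],
    2: ["host-c", "host-a"],
}

def live_owner(partition, failed_hosts):
    """Return the first assigned host that is still alive, or None."""
    for host in partitions[partition]:
        if host not in failed_hosts:
            return host
    return None

# If host-b fails, every partition still has a live owner,
# so the cluster keeps digesting while host-b's segments are rebuilt.
assert all(live_owner(p, {"host-b"}) for p in partitions)
```

The trade-off the release note describes follows from this layout: during normal operation only one owner does the digest work per partition, and after a failure the surviving owners must spend extra CPU catching up.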
Functions
- Limit the match() / lookup() functions to 20,000 rows, or to whatever MAX_STATE_LIMIT is set to.
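MAX_STATE_LIMIT is set through the environment configuration. A minimal sketch of how such a limit could be read, assuming the default of 20,000 rows stated above; the reader function itself is illustrative, not Humio's implementation:

```python
import os

# Default taken from the release note above. MAX_STATE_LIMIT is the real
# environment variable name; this reader is only an illustrative sketch.
DEFAULT_MAX_STATE_LIMIT = 20_000

def max_state_limit() -> int:
    """Return the configured row limit for match()/lookup() state."""
    return int(os.environ.get("MAX_STATE_LIMIT", DEFAULT_MAX_STATE_LIMIT))
```

Raising the limit (e.g. `MAX_STATE_LIMIT=30000` in the server's environment) trades memory for larger lookup tables.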