Humio Server 1.30.1 LTS (2021-10-01)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.30.1 | LTS | 2021-10-01 | Cloud | 2022-09-30 | No | 1.16.0 | Yes |
JAR Checksum | Value |
---|---|
MD5 | 790fc08715648deadf23b204f6e77cc9 |
SHA1 | 85ea236e0abbaf29740e7288d7cefeb2b1069260 |
SHA256 | e4f8dcc73fbeaa5dcc7d68aa6a972e3ab5ccbb66848c189743b2f50b8bcea832 |
SHA512 | 963ec5f550f5b496b08c9025e3fa9ed08c563e4270973092a4a1944a05bd79192316f3324d9a58e78dd014c2119ab389c1d6c566ef395b73c4df96f6d216e2c2 |
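A downloaded JAR can be checked against the digests in the table above by recomputing the hash locally. A minimal sketch using Python's standard `hashlib` (the filename passed to `verify` is your own download; `server-1.30.1.jar` below is only an illustrative name):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in 64 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Published SHA256 for the 1.30.1 server JAR (from the table above).
EXPECTED = "e4f8dcc73fbeaa5dcc7d68aa6a972e3ab5ccbb66848c189743b2f50b8bcea832"

def verify(path: str) -> bool:
    """True if the file at `path` matches the published digest."""
    return sha256_of(path) == EXPECTED
```

Usage would be e.g. `verify("server-1.30.1.jar")`, substituting the actual path of the JAR you downloaded; refuse to deploy the file if it returns `False`.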
These notes include entries from the following previous releases: 1.30.0
Fixes an issue with MatchExceptions, increases the frequency of jobs which delete segment files, fixes problems with `USING_EPHEMERAL_DISKS`, and upgrades Kafka and xmlsec to address CVEs.
Fixed in this release
Security
Kafka and xmlsec have been upgraded to address CVE-2021-38153 and CVE-2021-40690, respectively.
Other
Fixed an issue where the new-parser page in the UI could overflow in some browsers.
Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.
Fixed an issue where the query scheduler would spend too much time "shelving" queries, and not enough on getting them executed, leading to little progress on queries.
On a node configured with `USING_EPHEMERAL_DISKS=true`, local disk management is now allowed to delete files even if a query may need them later, as the system can re-fetch the files from bucket storage when required. This improves the situation when the active queries have, in total, requested access to more segments than the local disk can hold.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming for another partition, in the case where there are too many datasources in the repo. This led to a crash loop when the affected node was restarted.
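For context on the ephemeral-disks fix above: the improved eviction behavior only applies when the node is declared ephemeral and bucket storage is configured, so evicted segments can be fetched back on demand. A minimal configuration sketch, assuming S3 bucket storage (the `S3_STORAGE_*` values shown are illustrative placeholders for your own environment):

```ini
# Local disks are treated as a cache: segments may be deleted locally
# and re-fetched from bucket storage when a query needs them.
USING_EPHEMERAL_DISKS=true

# Bucket storage must be configured for re-fetching to work.
S3_STORAGE_BUCKET=my-humio-segments
S3_STORAGE_REGION=us-east-1
S3_STORAGE_ENCRYPTION_KEY=your-secret-key
```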