Humio Server 1.30.2 LTS (2021-11-19)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.30.2 | LTS | 2021-11-19 | Cloud | 2022-09-30 | No | 1.16.0 | No |
File Hashes
JAR Checksum | Value |
---|---|
MD5 | 2395ce5b8017a632b02372da3dc0159b |
SHA1 | 049f9a0ed9c4e9acafcefe1a65997b65ba57d3f7 |
SHA256 | 815bcce962ac9f43022424e2abdfa587f8377ba1ecf3b4c5ef423a43175fe424 |
SHA512 | bc93c9bbf9fe89eb0a279d265b775bc4b4590b897f7f08a31d2516cd767b4952c59e7a3bad9986b26592487246c8a54f2e29e85d9a2a248dc790418ec68627d7 |
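To confirm that a downloaded artifact matches the published hashes, compare a locally computed digest against the table above. Below is a minimal sketch in Python; the filename server-1.30.2.jar is a placeholder for wherever the downloaded JAR resides.

```python
import hashlib

# Published SHA256 for the 1.30.2 JAR (from the table above).
EXPECTED_SHA256 = "815bcce962ac9f43022424e2abdfa587f8377ba1ecf3b4c5ef423a43175fe424"

def sha256_of(path: str) -> str:
    """Compute the SHA256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "server-1.30.2.jar" is a placeholder path for the downloaded artifact.
actual = sha256_of("server-1.30.2.jar")
print("OK" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")
```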
These notes include entries from the following previous releases: 1.30.0, 1.30.1
Bug fixes related to a version dependency, problems with incomplete URLs, and organization-level permissions now being required in certain situations.
Fixed in this release
Security
Kafka and xmlsec have been upgraded to address CVE-2021-38153 and CVE-2021-40690, respectively.
Other
Fixed an issue where the UI page for creating a new parser could overflow in some browsers.
Fixed an issue where a URL containing nothing but the protocol would break installing a package.
Fixed an issue causing Humio to log MatchExceptions from the calculateStartPoint method.
Fixed an issue where the query scheduler would spend too much time "shelving" queries and not enough time executing them, leading to little progress on queries.
On a node configured with USING_EPHEMERAL_DISKS=true, allow local disk management to delete files even if a query may need them later, since the system can re-fetch the files from bucket storage when required. This improves the situation where active queries have, in total, requested access to more segments than the local disk can hold. See the configuration sketch at the end of this section.
Fixed an issue where the job responsible for deleting segment files off nodes was not running as often as expected.
Require organization-level permission when changing role permissions that possibly affect all views and repositories.
Fixed an issue where the job responsible for deleting segment files off nodes was not deleting as many segments as it should.
Updated a dependency to a version fixing a critical bug.
Fixed an issue where offsets from one Kafka partition could be used when deciding where to start consuming another partition, in cases where a repository has too many datasources. This led to a crash loop when the affected node was restarted.
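For reference on the ephemeral-disk fix above: that behavior is driven by node configuration and assumes bucket storage is set up. Below is a minimal sketch of a node environment file; the bucket name and region are placeholders, and the S3 settings shown are assumptions to be checked against the bucket storage documentation for your deployment. Only USING_EPHEMERAL_DISKS is taken from the note itself.

```
# Minimal sketch of a node environment file (values are placeholders).
# USING_EPHEMERAL_DISKS enables the eager local-disk deletion described above;
# segments evicted locally are re-fetched from bucket storage when needed.
USING_EPHEMERAL_DISKS=true
# Assumed bucket storage settings; verify names and values for your deployment.
S3_STORAGE_BUCKET=my-humio-bucket
S3_STORAGE_REGION=us-east-1
```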