Humio Server 1.5.8 Archive (2019-04-25)

Version: 1.5.8
Type: Archive
Release Date: 2019-04-25
Availability: Cloud, On-Prem
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.4.x
JDK Compatibility: 11
Req. Data Migration: No
Config. Changes: Yes
JAR Checksums

MD5: 4898c50c07b21c557e16f9ecb0525b64
SHA1: 6f7de5e75418ab04752927081ac4af0156b78df9
SHA256: 449c36c1b9cf793db02250e1d089594491fde458f46b33b2b4b2967ef7e0bef7
SHA512: d382620aa86df5fc7d24977d6097f1d40829e5b1c5cce5431ce6110ca256be99a636bdb1d5b0d322fee1cc784d55f7b4cc12ae78da059b4089cbb9739494e7e0

New dashboard editing code and many other improvements

Bug Fixes

  • Summary

    • In the table view, if a column value has the form [Label](URL), it is displayed as Label with a link to URL.
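
The link convention above can be sketched in Python. This is an illustrative rendering function, not Humio's actual implementation; the exact pattern Humio matches is not documented in this note.

```python
import re

# Markdown-style link pattern for cell values of the form [Label](URL).
# Illustrative only; Humio's real matching rules may differ.
LINK_RE = re.compile(r"^\[(?P<label>[^\]]+)\]\((?P<url>[^)]+)\)$")

def render_cell(value: str) -> str:
    """Turn a [Label](URL) cell value into an HTML link; pass others through."""
    m = LINK_RE.match(value)
    if m:
        return f'<a href="{m.group("url")}">{m.group("label")}</a>'
    return value

print(render_cell("[Docs](https://docs.humio.com)"))
# -> <a href="https://docs.humio.com">Docs</a>
print(render_cell("plain text"))
# -> plain text
```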

    • Dashboard queries that are not live but use a time interval relative to now are migrated to live queries. Going forward, queries with time intervals relative to now become live queries when added to dashboards.

    • S3 archiving now supports forward proxies.

    • parseTimestamp() now handles date-only values, e.g. 31-08-2019.
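
For comparison, a date-only value like the one above can be parsed as follows. This Python sketch only illustrates the value format; Humio's parseTimestamp() takes Java-style patterns (e.g. dd-MM-yyyy), for which %d-%m-%Y is the Python equivalent.

```python
from datetime import datetime, timezone

# Parse a date-only timestamp like "31-08-2019" (day-month-year).
# The UTC assignment is an assumption for the example.
ts = datetime.strptime("31-08-2019", "%d-%m-%Y").replace(tzinfo=timezone.utc)
print(ts.isoformat())  # -> 2019-08-31T00:00:00+00:00
```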

    • @source and @host are now supported for Filebeat v7.

    • The Auth0 integration now supports importing Auth0-defined roles. The new config option AUTH0_ROLES_KEY identifies the name of the role attribute in the JWT token from Auth0. See the new Auth0 configuration options under Map Auth0 Roles.
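
A minimal configuration sketch for the option above. The value shown is hypothetical; Auth0 custom claims are commonly namespaced with a URL, but the actual attribute name depends on how roles are added to the token in your Auth0 tenant.

```ini
# Illustrative Humio configuration; the claim name is an example, not a default.
# AUTH0_ROLES_KEY names the attribute in the Auth0 JWT that carries the roles.
AUTH0_ROLES_KEY=https://myorg.example.com/roles
```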

    • Bucket and region are now validated when configuring S3 archiving.

    • Fixed: alert notifiers using the standard template did not produce valid JSON.

    • The built-in audit-log parser now handles a variable number of fractional-second digits.

    • Humio's own Jitrex regular expression engine is again the default one.

  • Configuration

    • The config property KAFKA_DELETES_ALLOWED has been removed and replaced by DELETE_ON_INGEST_QUEUE, which is set to true by default. When the flag is set, Humio deletes data from the Kafka ingest queue once the data has been written to Humio's internal storage. When the flag is not set, Humio does not delete from the ingest queue. Regardless of this flag, it is important to configure retention for the queue in Kafka: retention defines how long data can stay on the ingest queue, and thus how much time Humio has to read the data and store it internally. If Kafka is managed by Humio, Humio sets a 48-hour retention when creating the queue.
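
The new flag and the accompanying Kafka retention advice can be sketched as a configuration fragment. This is illustrative, assuming a self-managed Kafka; the retention value mirrors the 48 hours Humio uses when it manages the queue itself.

```ini
# Illustrative Humio configuration (the flag name comes from this note;
# true is the documented default).
DELETE_ON_INGEST_QUEUE=true

# If you manage Kafka yourself, also configure retention on the ingest topic,
# e.g. retention.ms=172800000 (48 hours) in the topic configuration.
```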