Humio Server 1.5.8 Archive (2019-04-25)

Version: 1.5.8
Type: Archive
Release Date: 2019-04-25
Availability: Cloud
End of Support: 2020-11-30
Security Updates: No
Upgrades From: 1.4.x
Config. Changes: Yes

Available for download two days after release.


New dashboard editing code and many other improvements

Fixed in this release

  • Summary

    • In the table view, if column data has the form [Label](URL), it is displayed as Label with a link to URL.

    • Dashboard queries that are not live and use a time interval relative to now are migrated to live queries. Going forward, queries with time intervals relative to now will become live queries when added to dashboards.

    • S3 archiving now supports forward proxies.

    • parseTimestamp() now handles dates, e.g. 31-08-2019.

    • @source and @host are now supported for Filebeat v7.

    • The Auth0 integration now supports importing Auth0-defined roles. The new config option AUTH0_ROLES_KEY identifies the name of the role attribute in the JWT token from Auth0. See the new Auth0 configuration options under Map Auth0 Roles.

    • Validation of bucket and region when configuring S3 archiving.

    • Alert notifiers with the standard template did not produce valid JSON.

    • The built-in audit-log parser now handles a variable number of fractional-second digits.

    • Humio's own Jitrex regular expression engine is again the default one.

  • Configuration

    • Config property KAFKA_DELETES_ALLOWED has been removed and replaced by DELETE_ON_INGEST_QUEUE, which defaults to true. When this flag is set, Humio deletes data from the Kafka ingest queue once the data has been written to Humio. If the flag is not set, Humio does not delete from the ingest queue. Regardless of this flag, it is important to configure retention for the queue in Kafka: retention defines how long data can be kept on the ingest queue, and thus how much time Humio has to read the data and store it internally. If Kafka is managed by Humio, Humio sets a 48-hour retention when creating the queue.
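
The configuration changes in this release can be sketched as an environment fragment. This is illustrative only: the role attribute name shown for AUTH0_ROLES_KEY is an assumed example value, not one taken from this release notes page.

```ini
# Sketch of configuration options introduced in this release.

# Name of the role attribute in the JWT token from Auth0.
# "https://example.com/roles" is a hypothetical example value;
# use whatever claim your Auth0 setup actually emits.
AUTH0_ROLES_KEY=https://example.com/roles

# Replaces the removed KAFKA_DELETES_ALLOWED; defaults to true.
# When true, Humio deletes from the Kafka ingest queue once data
# has been written to Humio. Kafka retention on the queue must
# still be configured either way (Humio sets 48 hours when it
# manages Kafka itself).
DELETE_ON_INGEST_QUEUE=true
```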
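
The table-view link rendering and the new parseTimestamp() date support noted above can be combined in a query. This is a minimal sketch assuming hypothetical event fields name, url, and date, and the usual format()/parseTimestamp() signatures:

```
// Build a "[Label](URL)" column that the table view renders as a link.
// name and url are hypothetical example fields.
format("[%s](%s)", field=[name, url], as=link)
// Parse a plain date such as 31-08-2019 (new in this release).
| parseTimestamp("dd-MM-yyyy", field=date)
| table([link, @timestamp])
```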