Humio Server 1.5.8 Archive (2019-04-25)
Version | Type | Release Date | End of Support | Upgrades From | JDK Compatibility | Data Migration | Config. Changes |
---|---|---|---|---|---|---|---|
1.5.8 | Archive | 2019-04-25 | 2019-11-19 | 1.4.x | 11 | No | Yes |
JAR Checksum | Value |
---|---|
MD5 | 4898c50c07b21c557e16f9ecb0525b64 |
SHA1 | 6f7de5e75418ab04752927081ac4af0156b78df9 |
SHA256 | 449c36c1b9cf793db02250e1d089594491fde458f46b33b2b4b2967ef7e0bef7 |
SHA512 | d382620aa86df5fc7d24977d6097f1d40829e5b1c5cce5431ce6110ca256be99a636bdb1d5b0d322fee1cc784d55f7b4cc12ae78da059b4089cbb9739494e7e0 |
New dashboard editing code and many other improvements
Bug Fixes
Summary
Humio's own Jitrex regular expression engine is again the default one.
Alert notifiers with the standard template did not produce valid JSON.
parseTimestamp() now handles dates, e.g. 31-08-2019 (see the sketch after this list).
The Auth0 integration now supports importing Auth0-defined roles. The new config AUTH0_ROLES_KEY identifies the name of the role attribute in the JWT token coming from Auth0. See the new Auth0 config options under Map Auth0 Roles, and the configuration sketch after this list.
@source and @host are now supported for Filebeat v7.
Dashboard queries that are not live and use a time interval relative to now are migrated to live queries. Going forward, queries with time intervals relative to now will be live queries when added to dashboards.
In table view, if column data is of the form [Label](URL), it is displayed as Label with a link to URL.
S3 archiving now supports forward proxies.
Validation of bucket and region when configuring S3 archiving.
Built-in audit-log parser now handles a variable number of fractional-second digits.
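A minimal query sketch of the parseTimestamp() change above; the field name logdate is hypothetical, and the format/field parameter usage is an assumption based on the function's documented behavior rather than something stated in this release note.

```
// Hypothetical sketch: parse a date-only value such as 31-08-2019 from a
// field named "logdate" (the field name is made up for illustration).
parseTimestamp(format="dd-MM-yyyy", field=logdate)
```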
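A configuration sketch for the Auth0 roles import above; the value shown is a placeholder, since the actual attribute name depends on how roles are added to the token in your Auth0 tenant.

```
# Hypothetical sketch: name of the attribute in the token from Auth0 that
# carries the Auth0-defined roles (the value below is a placeholder).
AUTH0_ROLES_KEY=https://example.com/roles
```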
Configuration
The config property KAFKA_DELETES_ALLOWED has been removed and DELETE_ON_INGEST_QUEUE is introduced instead. DELETE_ON_INGEST_QUEUE is set to true by default. When this flag is set, Humio deletes data from the Kafka ingest queue once that data has been written in Humio. If the flag is not set, Humio does not delete from the ingest queue. Regardless of how this flag is set, it is important to configure retention for the queue in Kafka. If Kafka is managed by Humio, Humio sets a 48-hour retention when creating the queue. This retention defines how long data can be kept on the ingest queue and thus how much time Humio has to read the data and store it internally.