Humio Server 1.7.2 Preview (2020-01-16)
Change: When the system starts with no users at all, the first user to log in gets root privileges inside the system.
Bucket storage: Support download after switching provider from S3 to GCP or vice versa.
Bug fix: Restart of queries using lookup/match/cidr when the uploaded file changes only worked for top-level functions, not when nested inside another function.
New config: LIVEQUERY_CANCEL_TRIGGER_DELAY_MS and LIVEQUERY_CANCEL_COST_PERCENTAGE control the cancellation of the live queries that have consumed the most cost over the previous 30s when the system experiences digest latency of more than the configured delay. New metrics:
Bucket storage: Also keep copies of the "metadata files" that you use for lookup and match functions in the bucket and restore from there when needed.
Bug fix: Retention could fail to delete obsolete files in certain cases.
top(x, sum=y) now also supports non-integer values of y (even though the internal state is still an integer value).
Bug fix: The Zookeeper status page now shows a warning when the commands it needs for the status page to work are not whitelisted on the ZK server.
Bug fix: #repo=* never matched, but should always match.
Bug fix: Bucket storage, GCP variant: Remove temporary files after download from GCP. Previous versions left a copy in the tmp dir.
New utility inside the jar. Usage:
java -cp humio.jar com.humio.main.DecryptAESBucketStorageFile <secret string> <encrypted file> <decrypted file>
Allows decrypting a file that was uploaded using bucket storage, outside the system.
Bucket storage: Continue cleaning the old buckets after switching provider from S3 to GCP or vice versa.
New config: LIVEQUERY_STALE_CANCEL_TRIGGER_DELAY_MS and LIVEQUERY_STALE_CANCEL_COST_PERCENTAGE control the discarding of live queries that have not been polled by a client for a while when the system experiences digest latency of more than the configured delay.
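Taken together, the two pairs of live-query settings introduced in this release might be configured as environment variables along these lines; the numeric values below are illustrative assumptions, not documented defaults or recommendations:

```shell
# Cancel the costliest live queries once digest latency exceeds 20s (assumed value)
LIVEQUERY_CANCEL_TRIGGER_DELAY_MS=20000
# Percentage threshold for cost-based cancellation (assumed interpretation and value)
LIVEQUERY_CANCEL_COST_PERCENTAGE=25

# Discard live queries no client has polled, once digest latency exceeds 60s (assumed value)
LIVEQUERY_STALE_CANCEL_TRIGGER_DELAY_MS=60000
LIVEQUERY_STALE_CANCEL_COST_PERCENTAGE=50
```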
New config: LOG4J_CONFIGURATION allows a custom Log4j configuration file. Or set it to one of the built-in configurations:
log4j2-stdout.xml to get the log in plain text dumped on stdout, or
log4j2-stdout-json.xml to get the log in NDJSON format, one line for each event, on stdout.
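As an example, selecting the built-in NDJSON configuration could look like this (a sketch; how the variable reaches the process depends on your deployment):

```shell
# Emit one JSON object per log event on stdout
LOG4J_CONFIGURATION=log4j2-stdout-json.xml
```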
Bug fix: Query of segments only present in a bucket now works even if disabling further uploads to bucket storage.
The "query monitor" and "query quota" now share the definition of "cost points". The definition has changed in such a way that quotas saved by versions 1.7.1 and earlier are disregarded by this and later versions.
New config: USING_EPHEMERAL_DISKS allows running a cluster on disks that may be lost when the system restarts, by assuming that only the copies in bucket storage and the events in Kafka are preserved across restarts. If the filesystem does survive a restart, that is also fine in this mode, and more efficient than fetching the files from the bucket.
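A minimal sketch of enabling this mode, under the assumption that bucket storage and Kafka are already configured as the durable stores:

```shell
# Local disks may vanish on restart; durability comes from bucket storage and Kafka
USING_EPHEMERAL_DISKS=true
```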