Humio Server 1.2.8 Archive (2019-01-17)

Version: 1.2.8
Type: Archive
Release Date: 2019-01-17
End of Support: 2019-11-19
Upgrades From: 1.2.0
Data Migration: No
Config. Changes: No
JAR Checksums

MD5: 988ba57c9b42c57a996e3cca5874e4f8
SHA1: eff65659bcef942f04e32f13882dd89dd592c00b
SHA256: 8ea328361f1a1b0d08177696aecc0239e1802caffd971e93ffc2302bc4bb912b
SHA512: d7f7ea99cc6de3b72adb419ffc52095c1ec7c02b9bc436bd73de7998b04429747c660faa2f672d8a8995c574d9321211000c8586eff753a37c7c1505826da8a3

Maintenance Build

Bug Fixes

  • Summary

    • The configuration option KAFKA_DELETES_ALLOWED can be set to true to enable deletes on the ingest queue even when KAFKA_MANAGED_BY_HUMIO=false.

    • Cloning a built-in parser unintentionally made the clone read-only. This has been fixed.

    • In a cluster where not all servers had digest partitions, live queries could leave events stuck in the result after they should have fallen outside the query's time range.

    • Handle HTTP 413 errors when uploading overly large files on the Files page.

    • Support for applying a custom parser to input events from any "beat" shipper by assigning the parser to the ingest token.

    • Better names for the metrics exposed on JMX. They are all in the com.humio.metrics package.

  • Functions

    • New function, intended mainly for use in parsers: parseCsv() parses comma-separated fields into named columns.
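As a sketch of how the KAFKA_DELETES_ALLOWED option might sit alongside the existing Kafka setting in a Humio environment configuration (the file name and location depend on your deployment):

```
# Kafka is provisioned and managed outside Humio
KAFKA_MANAGED_BY_HUMIO=false
# New in 1.2.8: still allow Humio to issue deletes on the ingest queue
KAFKA_DELETES_ALLOWED=true
```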
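A minimal sketch of how parseCsv() might be used inside a parser; the column names here (and the sample input) are illustrative assumptions, not part of this release note:

```
// Assuming @rawstring holds lines such as "2019-01-17,frontend,200",
// split the comma-separated values into named fields
parseCsv(columns=[date, service, statuscode])
```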