Humio Server 1.2.8 Archive (2019-01-17)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.2.8 | Archive | 2019-01-17 | Cloud | 2020-11-30 | No | 1.2.0 | No |
Available for download two days after release.
JAR Checksum | Value |
---|---|
MD5 | 988ba57c9b42c57a996e3cca5874e4f8 |
SHA1 | eff65659bcef942f04e32f13882dd89dd592c00b |
SHA256 | 8ea328361f1a1b0d08177696aecc0239e1802caffd971e93ffc2302bc4bb912b |
SHA512 | d7f7ea99cc6de3b72adb419ffc52095c1ec7c02b9bc436bd73de7998b04429747c660faa2f672d8a8995c574d9321211000c8586eff753a37c7c1505826da8a3 |
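One way to check a download against the published checksums is with the coreutils `sha256sum` tool in check mode. The filename below is a placeholder; substitute the name of the JAR you actually downloaded.

```shell
# Verify the downloaded JAR against the published SHA256 value.
# NOTE: "humio-server-1.2.8.jar" is a placeholder filename, not from
# the release notes; use the name of the file you downloaded.
echo "8ea328361f1a1b0d08177696aecc0239e1802caffd971e93ffc2302bc4bb912b  humio-server-1.2.8.jar" | sha256sum -c -
```

`sha256sum -c` prints `<filename>: OK` on a match and exits non-zero on a mismatch.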
Maintenance Build
Fixed in this release
Summary
Live queries in a cluster where not all servers had digest partitions could leave events stuck in the result after they should have fallen outside the query's time range.
Better names for the metrics exposed on JMX. They are all in the com.humio.metrics package.
Cloning built-in parsers made the clones read-only, which was not intentional.
Config
KAFKA_DELETES_ALLOWED can be set to "true" to turn on deletes on the ingest queue even when KAFKA_MANAGED_BY_HUMIO=false.
Support for applying a custom parser to input events from any "beat" ingester by assigning the parser to the ingest token.
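As a sketch, assuming Humio's usual environment-variable configuration, enabling deletes on a Kafka instance that Humio does not manage might look like:

```
# Kafka is managed outside Humio, but Humio may still issue deletes
# on the ingest queue once data has been digested.
KAFKA_MANAGED_BY_HUMIO=false
KAFKA_DELETES_ALLOWED=true
```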
Handle HTTP 413 errors when uploading overly large files on the files page.
Functions
New function, mostly for use in the parser scope: parseCsv() parses comma-separated fields into columns by name.
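As an illustrative sketch (the field name and column names here are hypothetical, not taken from the release notes), a parser might invoke the new function like this:

```humio
// Parse a hypothetical comma-separated field "result" into named columns.
// Assumes input such as: 2019-01-17,ok,200
parseCsv(field=result, columns=[date, status, code])
```

Each value is assigned to the correspondingly positioned column name, so the example above would yield fields date, status, and code on the event.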