Published on 2021-02-22

Humio 1.21.0

Complete UI Revamp

Version: 1.21.0
Type: Preview
Release Date: 2021-02-22
End of Support: 2021-03-02
Upgrades From: 1.16.0
Data Migration: No
Config. Changes: No

JAR Checksums

MD5:    3175d041a4c0a6948d5e23993b7a3bcd
SHA1:   1356a57098602623b5cab8511f530aab3b04a080
SHA256: 8f576aca2a00533180ed3710971bd9c4c419e275d618c4c745d004b9a5ad9987
SHA512: 475c72b5655744be0a900269478d930942cd7aae9ec8acf0e38c1eff2a4c7ec243c91293996ad8288ec2ed9c72b896436bb8e12b67f44b999fc03d1f43db4a2d

TGZ Checksums

MD5:    3175d041a4c0a6948d5e23993b7a3bcd
SHA1:   1356a57098602623b5cab8511f530aab3b04a080
SHA256: 8f576aca2a00533180ed3710971bd9c4c419e275d618c4c745d004b9a5ad9987
SHA512: 475c72b5655744be0a900269478d930942cd7aae9ec8acf0e38c1eff2a4c7ec243c91293996ad8288ec2ed9c72b896436bb8e12b67f44b999fc03d1f43db4a2d

Important Information about Upgrading

Beginning with version 1.17.0, Humio will report an error at startup if your current version cannot upgrade directly to the new version. The 1.21.0 release is only compatible with Humio release 1.16.0 and newer, so you must have upgraded to at least 1.16.0 before upgrading to 1.21.0. Likewise, a rollback can only go back to 1.16.0 or newer; rolling directly back to an earlier release can result in data loss.

Change Log

  • The default parser kv has been changed from using the parseTimestamp() function to using the findTimestamp() function. This makes it able to parse more timestamp formats, but it still only parses timestamps that include a timezone. It also no longer adds a timezone field with the extracted timestamp string; that field was only used while parsing the timestamp and was never meant to be stored on the event. To keep the old functionality, clone the kv parser in the relevant repositories and store the cloned parser with the name kv. This can be done before upgrading to this release. See Built-In kv Parser.

  • kvParse() now unescapes backslashes when they are inside quotes (' or ").

  • kvParse() now only unescapes quotes and backslashes that are inside a quoted string.

  • kvParse() now also unescapes single quotes ('). See the kvParse() example after the change log.

  • The findTimestamp() function has been changed so that it no longer has a default value for the timezone parameter. Previously, the default was UTC. If no timezone argument is supplied, the function will not parse timestamps that do not contain a timezone. To get the old functionality, simply add timezone=UTC to the function; this can be done before upgrading to this release. An example is shown after the change log.

  • The deprecated built-in parser json-for-notifier has been deleted. It has been replaced by the parser json-for-action.

  • The deprecated built-in parser bro-json has been deleted. It has been replaced by the parser zeek-json.

  • The split() function no longer adds a @display field to the event it outputs.

  • Make the thread dump job run on a dedicated thread, rather than running on the thread pool shared with other jobs.

  • Added support for disaster recovery of a cluster where all nodes, including Kafka, have been lost. The state present in bucket storage is restored as a fresh cluster, using the old bucket as a read-only source. New configuration options S3_RECOVER_FROM_REPLACE_REGION and S3_RECOVER_FROM_REPLACE_BUCKET allow modifying the region/bucket names while recovering, so recovery can run against a replica. The read-only source is specified using S3_RECOVER_FROM_* variants of all the bucket storage target parameters otherwise named S3_STORAGE_*. See the configuration sketch after the change log.

  • Improve the hit rate of the query state cache by allowing similar, but not identical, queries to share a cache entry when that entry can form the basis for both. The cache format is incompatible with previous versions; this is handled internally by treating incompatible cache entries as cache misses.

  • Improve performance of writeJson() a bit.

  • Improve number formatting in certain places by being better at removing trailing zeros.

  • Change handling of groupBy() in live queries, which should in many cases reduce memory cost.

  • The experimental function moment() has been removed.

  • subnet() now reports an error if its bits argument is outside the range 0 to 32. See the example after the change log.

  • The transpose() function now reports an error if the header or column argument is provided together with the pivot argument.

  • The replace() function now reports an error if the replacement and with arguments are provided at the same time. See the replace() example after the change log.

  • The replace() function now reports an error if an unsupported flag is provided in the flags argument.

  • The worldMap() and geohash() functions now report an error if the requested precision is greater than 12.

  • When running on ephemeral disks, nodes being replaced with new ones on empty disks no longer download most of the segments they had before being replaced, but instead schedule downloads based on what is being searched.

  • The Auth0 login page will no longer load a local version of the Auth0 Lock library, but will instead load a login script hosted on Auth0's CDN. This may require opening access to https://cdn.auth0.com/ if hosting Humio behind a firewall.

  • Lowered the severity level of some log messages for running alerts.

  • Made log messages for running alerts more consistent and more structured. All log messages regarding a specific alert contain the keys alertId, alertName and viewId. Log messages regarding the alert query always contain the key externalQueryId and sometimes also the keys queryId (the internal ID) and query (the actual query string). If there are problems with the run-as user, the ID of that user is logged with the key user.

  • Made log messages for running scheduled searches more consistent and more structured. All log messages regarding a specific scheduled search contain the keys scheduledSearchId, scheduledSearchName and viewId. Log messages regarding the scheduled search query always contain the key externalQueryId and sometimes also the keys queryId (the internal ID) and query (the actual query string). If there are problems with the run-as user, the ID of that user is logged with the key user.

  • Prevent Humio from booting when ZooKeeper has been reset but Kafka has not.

  • Creation, update and deletion of an alert, scheduled search or action are now recorded in the audit log.

  • Running a test of a parser is no longer recorded in the audit log, and irrelevant fields are no longer recorded upon parser deletion.

  • When using filters on dashboards, you can now easily reset the filter, either removing it completely, or using the default filter if one is present.

  • Made sure the default parser on the humio view is only installed when missing, instead of being overwritten every time Humio starts.

  • When exporting a package, you now get a preview of the icon you’ve added for the package.

  • Packages can now be updated with the same version but new content. This makes iterating over a package before finalizing it easier.

  • The Humio Insights package is now installed on the humio view, if missing, when Humio is started.

  • Raised the parser test character length limit to 20000.

  • Raised the note widget text length limit to 20000.

  • Fixed a performance and a robustness problem with the function unit:convert(). The formatting of the numbers in its output may in some cases be different now.

  • Fixed a number of potential concurrency issues.

  • Fixed a memory leak in rdns() in cases where many different name servers are used.

  • Fixed a bug in parseJson() which resulted in failed JSON parsing if an object contained an empty key ("").

  • Fixed a bug which caused eventInternals() to crash if used late in the pipeline.

  • Fixed a bug which caused validation to miss rejecting window() inside window() and session().

  • Fixed a bug which could cause saving of query state cache to take a rather long time.

  • Fixed a bug which could potentially cause a query state cache file to be read in an incomplete state.

  • Fixed a bug in upper() and lower() which could cause their output to be corrupted (in cases where no characters had been changed).

  • Fixed a bug where analysis of a regex could consume extreme amounts of memory.

  • Fixed a bug in lowercase() which caused the call lowercase(field="*", include="values") to not process all fields but only the field named "*". See the lowercase() example after the change log.

  • Fixed a bug where saved queries were not referenced correctly after being exported as part of a package.

  • Fixed bugs in format() which caused output from '%e' / '%g' to be incorrect in certain cases.

  • Fixed an issue causing Humio to crash when attempting to delete an idle empty datasource right as the datasource receives new data.

  • Fixed an issue with the validation of the query prefix set on a view for each repository within the view: Invoking macros is not allowed and was correctly rejected when creating a view, but was not rejected when editing an existing connection.

  • Fixed an issue where merges of segments were reported as failed due to input files being deleted while merging. This is not an error and is no longer reported as such.

  • Fixed an issue where the segment mover might schedule too many segments for transfer at a time.

  • Fixed an issue with lack of escaping in the filename when downloading.

  • Fixed an issue causing segment tombstones to potentially be deleted too early if bucket storage is enabled, causing an error log.

  • Fixed an issue causing event redirection to break when using copyEvent() to get the same events ingested into multiple repositories.

  • Fixed an issue where repeating queries would not validate in alerts.

  • Fixed an issue where cancelled queries could be cached.
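
Examples

The following sketches illustrate some of the changes above. They are illustrative only: field names, sample values and any parameter not mentioned in the change log are assumptions, not documentation of this release.

To keep the old findTimestamp() behaviour for timestamps without a timezone, pass the timezone parameter explicitly, for example in a parser:

    // Extract key-value pairs, then locate a timestamp anywhere in the event.
    // timezone="UTC" restores the pre-1.21.0 default for events whose
    // timestamps do not contain a timezone.
    kvParse()
    | findTimestamp(timezone="UTC")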
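
The kvParse() quoting changes affect how escaped characters inside quoted values are extracted. A hypothetical input line and the fields it would produce (field names and values are made up for illustration):

    // Input @rawstring, for illustration:
    //   msg="a \"quoted\" word" path='C:\\temp' user=root
    kvParse()
    // Expected fields in 1.21.0:
    //   msg:  a "quoted" word   (escaped double quotes are unescaped inside quotes)
    //   path: C:\temp           (escaped backslashes are unescaped inside quotes)
    //   user: root              (unquoted values are not unescaped)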
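
The stricter validation of replace() means conflicting or unsupported arguments are now rejected instead of being silently accepted. A sketch, assuming a field named level; the flag letter "i" (case-insensitive matching) is an assumption about the supported flags:

    // Valid: case-insensitive replacement in a specific field.
    replace("error", with="ERROR", field=level, flags="i")

    // Rejected in 1.21.0: supplying both replacement and with at the same time,
    // or passing a flag letter the function does not support.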
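
The lowercase() fix means the wildcard form now processes every field rather than only a field literally named "*". For illustration, with assumed fields HOST and LEVEL:

    // Lowercases the values of all fields, e.g. HOST=WEB01, LEVEL=ERROR
    // become HOST=web01, LEVEL=error. Before the fix, only a field
    // actually named "*" was processed.
    lowercase(field="*", include="values")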
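
The subnet() range check rejects a bits value outside 0 to 32. A sketch; only the bits parameter is named in the change log, the field and as parameter names here are assumptions:

    // Compute the /24 network for each event's IP address.
    // bits must be between 0 and 32, otherwise the query now reports an error.
    subnet(field=ipAddress, bits=24, as=network)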
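
For the disaster-recovery support, a hedged configuration sketch. S3_RECOVER_FROM_REPLACE_REGION and S3_RECOVER_FROM_REPLACE_BUCKET are named in this release; the other S3_RECOVER_FROM_* names follow the stated pattern of mirroring the S3_STORAGE_* target parameters, and all values below are placeholders:

    # Bucket storage target for the fresh cluster (existing S3_STORAGE_* family).
    S3_STORAGE_BUCKET=new-cluster-bucket
    S3_STORAGE_REGION=eu-west-1

    # Read-only source holding the lost cluster's data: each S3_STORAGE_*
    # target parameter has an S3_RECOVER_FROM_* counterpart.
    S3_RECOVER_FROM_BUCKET=old-cluster-bucket
    S3_RECOVER_FROM_REGION=eu-west-1

    # New in 1.21.0: modify the region/bucket names while recovering,
    # to allow running against a replica of the old bucket.
    S3_RECOVER_FROM_REPLACE_REGION=us-east-1
    S3_RECOVER_FROM_REPLACE_BUCKET=old-cluster-bucket-replica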