Humio Server 1.21.0 GA (2021-02-22)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.21.0 | GA | 2021-02-22 | Cloud | 2022-03-31 | No | 1.16.0 | No |
JAR Checksum | Value |
---|---|
MD5 | 3175d041a4c0a6948d5e23993b7a3bcd |
SHA1 | 1356a57098602623b5cab8511f530aab3b04a080 |
SHA256 | 8f576aca2a00533180ed3710971bd9c4c419e275d618c4c745d004b9a5ad9987 |
SHA512 | 475c72b5655744be0a900269478d930942cd7aae9ec8acf0e38c1eff2a4c7ec243c91293996ad8288ec2ed9c72b896436bb8e12b67f44b999fc03d1f43db4a2d |
Important Information about Upgrading
Beginning with version 1.17.0, Humio refuses to start and reports an error if your current version cannot upgrade directly to the new version. The 1.21.0 release is only compatible with Humio release 1.16.0 and newer, so you must upgrade to at least 1.16.0 before upgrading to 1.21.0. Likewise, a rollback can ONLY go back to 1.16.0 or newer; rolling directly back to an earlier release can result in data loss.
Removed
Items that have been removed as of this release.
Other
* The deprecated built-in parser `bro-json` has been deleted. It has been replaced by the parser `zeek-json`.
* The deprecated built-in parser `json-for-notifier` has been deleted. It has been replaced by the parser `json-for-action`.
Fixed in this release
Automation and Alerts
* Creating, updating, and deleting an alert, scheduled search, or action is now recorded in the audit log.
Functions
* Fixed a bug in `lowercase()` which caused `lowercase(field="*", include="values")` to not process all fields, but only the field named `"*"`.
* Fixed a bug which caused validation to miss rejecting `window()` inside `window()` and `session()`.
* `subnet()` now reports an error if its argument `bits` is outside the range 0 to 32.
* The `replace()` function now reports an error if the arguments `replacement` and `with` are provided at the same time (see the first sketch after this list).
* The `split()` function no longer adds a `@display` field to the events it outputs.
* The `replace()` function now reports an error if an unsupported flag is provided in the `flags` argument.
* Changed the handling of `groupBy()` in live queries, which should in many cases reduce memory cost.
* The functions `worldMap()` and `geohash()` now generate errors if the requested precision is greater than 12.
* Fixed a memory leak in `rdns()` in cases where many different name servers are used.
* Fixed a bug which caused `eventInternals()` to crash if used late in the pipeline.
* The `transpose()` function now reports an error if either of the arguments `header` or `column` is provided together with the argument `pivot`.
* Fixed bugs in `format()` which caused output from `%e` and `%g` to be incorrect in certain cases.
* Fixed a performance and a robustness problem with the function `unit:convert()`. The formatting of the numbers in its output may in some cases be different now.
* The `findTimestamp()` function no longer has a default value for the `timezone` parameter; previously, the default was `UTC`. If no `timezone` argument is supplied, the function will not parse timestamps that do not contain a timezone. To get the old behavior, simply add `timezone=UTC` to the function call (see the second sketch after this list). This can be done before upgrading to this release.
* The experimental function `moment()` has been removed.
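For illustration, a minimal sketch of a `replace()` call that passes the new validation, supplying only one of `with`/`replacement` (the field name `loglevel` is an assumption, not taken from the release note):

```
// Valid: exactly one of `with` / `replacement` is given.
// `loglevel` is a hypothetical field name used for illustration.
replace(regex="warn", with="WARNING", field=loglevel)
```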
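And a sketch of the `findTimestamp()` migration mentioned above, keeping the pre-1.21.0 behavior by stating the timezone explicitly in the parser:

```
// Behave as before this release: assume UTC when the timestamp has no timezone.
findTimestamp(timezone="UTC")
```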
Other
* The Humio Insights package is now installed on the humio view at startup, if it is missing.
* Fixed an issue causing event redirection to break when using `copyEvent` to get the same events ingested into multiple repositories.
* Raised the note widget text length limit to .00.
* `kvParse()` now unescapes backslashes when they are inside (`'` or `"`) quotes (see the first sketch after this list).
* Fixed an issue where repeating queries would not validate in alerts.
* The thread dump job now runs on a dedicated thread, rather than on the thread pool shared with other jobs.
* Fixed an issue with lack of escaping in the filename when downloading.
* Running a test of a parser is no longer recorded in the audit log, and irrelevant fields are no longer recorded upon parser deletion.
* Made the logging for running alerts more consistent and more structured. All log entries regarding a specific alert will contain the keys `alertId`, `alertName`, and `viewId`. Log entries regarding the alert query will always contain the key `externalQueryId`, and sometimes also the keys `queryId` with the internal ID and `query` with the actual query string. If there are problems with the run-as user, the ID of that user is logged with the key `user`.
* Fixed a bug where analysis of a regex could consume extreme amounts of memory.
* Raised the parser test character length limit to .00.
* Fixed an issue where the segment mover might schedule too many segments for transfer at a time.
* Fixed a number of potential concurrency issues.
* Fixed an issue causing Humio to crash when attempting to delete an idle, empty datasource just as the datasource receives new data.
* Made sure the default parser on the humio view is only installed when missing, instead of being overwritten every time Humio starts.
* Prevented Humio from booting when ZooKeeper has been reset but Kafka has not.
* Improved number formatting in certain places by removing trailing zeros more thoroughly.
* Lowered the severity level of some log entries for running alerts.
* Fixed a bug where referenced saved queries were not referenced correctly after exporting them as part of a package.
* `kvParse()` now also unescapes single quotes (`'`).
* Improved the hit rate of the query state cache by allowing similar, but not identical, queries to share a cache entry when that entry can form the basis for both. The cache format is incompatible with previous versions; this is handled internally by treating incompatible cache entries as cache misses.
* Fixed a bug which could cause saving of the query state cache to take a rather long time.
* The default parser `kv` has been changed from using the `parseTimestamp()` function to using the `findTimestamp()` function. This makes it able to parse more timestamp formats; it will still only parse timestamps with a timezone. It also no longer adds a `timezone` field with the extracted timestamp string, as this was only done for parsing the timestamp and was not meant to be stored on the event. To keep the old functionality, clone the `kv` parser in the relevant repositories and store the cloned parser under the name `kv`. This can be done before upgrading to this release. See kv.
* Fixed an issue with the validation of the query prefix set on a view for each repository within the view: invoking macros is not allowed and was correctly rejected when creating a view, but was not rejected when editing an existing connection.
* Fixed a bug which could potentially cause a query state cache file to be read in an incomplete state.
* Fixed a bug in `parseJson()` which resulted in failed JSON parsing if an object contained an empty key (`""`).
* Improved the performance of `writeJson()` a bit.
* When using filters on dashboards, you can now easily reset the filter, either removing it completely or applying the default filter if one is present.
* Fixed an issue causing segment tombstones to potentially be deleted too early if bucket storage is enabled, causing an error log.
* Made the logging for running scheduled searches more consistent and more structured. All log entries regarding a specific scheduled search will contain the keys `scheduledSearchId`, `scheduledSearchName`, and `viewId`. Log entries regarding the scheduled search query will always contain the key `externalQueryId`, and sometimes also the keys `queryId` with the internal ID and `query` with the actual query string. If there are problems with the run-as user, the ID of that user is logged with the key `user`.
* Fixed an issue where cancelled queries could be cached.
* Fixed a bug in `upper()` and `lower()` which could cause their output to be corrupted (in cases where no characters had been changed).
* Fixed an issue where merges of segments were reported as failed due to input files being deleted while merging. This is not an error, and it is no longer reported as such.
* `kvParse()` now only unescapes quotes and backslashes that are inside a quoted string.
* Added support for disaster recovery of a cluster where all nodes, including Kafka, have been lost: the state present in bucket storage is restored as a fresh cluster, using the old bucket as a read-only source, and a fresh cluster is formed from that. Specify the read-only source using `S3_RECOVER_FROM_*` variants of all the bucket storage target parameters otherwise named `S3_STORAGE_*`. The new configs `S3_RECOVER_FROM_REPLACE_REGION` and `S3_RECOVER_FROM_REPLACE_BUCKET` allow modifying the names of the region/bucket while recovering, to allow running on a replica (see the second sketch after this list).
* When using ephemeral disks, nodes being replaced with new ones on empty disks no longer download most of the segments they had before being replaced, but instead schedule downloads based on what is being searched.
* The Auth0 login page will no longer load a local version of the Auth0 Lock library, but will instead load a login script hosted on Auth0's CDN. This may require opening access to https://cdn.auth0.com/ if hosting Humio behind a firewall.
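As a rough sketch of the combined `kvParse()` quoting changes above (the raw event below is invented for illustration):

```
// Given a raw event such as:  msg="a \"quoted\" value" note='it\'s fine'
kvParse()
// the escaped quotes and backslashes inside the quoted strings are now
// unescaped, e.g. msg becomes: a "quoted" value
```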
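And a hedged configuration sketch for the disaster-recovery item above. Only `S3_RECOVER_FROM_REPLACE_REGION` and `S3_RECOVER_FROM_REPLACE_BUCKET` are named in this release; the `S3_RECOVER_FROM_BUCKET`/`S3_RECOVER_FROM_REGION` names below are assumed to follow the stated `S3_STORAGE_*` pattern, and all values are placeholders:

```
# Read-only source to recover from (names derived from the S3_STORAGE_* pattern):
S3_RECOVER_FROM_BUCKET=old-cluster-bucket
S3_RECOVER_FROM_REGION=us-east-1
# Optionally rewrite the region/bucket names during recovery, e.g. to run on a replica:
S3_RECOVER_FROM_REPLACE_REGION=eu-central-1
S3_RECOVER_FROM_REPLACE_BUCKET=replica-bucket
```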
Packages
* When exporting a package, you now get a preview of the icon you've added for the package.
* Packages can now be updated with the same version but new content. This makes iterating over a package before finalizing it easier.