Humio Server 1.35.0 Preview (2022-01-17)

Version: 1.35.0
Type: Preview
Release Date: 2022-01-17
Availability: Cloud, On-Prem
End of Support: 2023-01-31
Security Updates: No
Upgrades From: 1.26.0
JDK Compatibility: 11
Req. Data Migration: No
Config. Changes: Yes
JAR Checksum  Value
MD5           83b164ea22cbc8a347ee22a4e49d87fb
SHA1          cde673dc3026cf8455e980a7f1ffc17de2a92072
SHA256        a43b1f86fe2f610eadaacfcff50264de578be56959125d24d27e51e1eacc403d
SHA512        a0897d2b8acb1ad888c1f0128bc46a9f3a425cc87adc40192ea3089533f247ce0ddff23971b1298d58cdb9f0e93f1bb24ae3392587de3a313fa4bbd89a32747b
Docker Image  SHA256 Checksum
humio         2f6d1b42b5d2d519bd0152bf6e28952a8f5d9e711bfc6c8c50fb58d917d033f0
humio-core    56193b2add6ece05561e058ffcd1706989b9633cb1da857b26582df7d7bf210a
kafka         149bce1bfa2e9c3e8eb6ff3d8c9416fad058b392134cab5958bf405f2553d264
zookeeper     9393097554e28403372ef38f87eab200292a41f75d8d38a8d102f926e7919eae

Beta: Bucket storage support for dual targets

Added support for dual bucket storage targets, allowing one to serve as the preferred download target and the other as the trusted target for durability. One example is saving on traffic cost by using a local bucket implementation, such as MinIO, in the local datacenter as the preferred bucket storage target, while using a remote Amazon S3 bucket as the trusted bucket for durability. If the local MinIO bucket is lost (or just not responding for a while), the Humio cluster keeps working using the AWS S3 bucket, with no reconfiguration or restart required. The second bucket is configured via configuration entries similar to the existing STORAGE keys, but using the prefix STORAGE_2 for the extra bucket.

When using dual targets, the two bucket storage backends may or may not need different proxy configurations. The new configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS (default false) controls whether the proxy configuration in the environment is applied to all bucket storage backends. When set to true, each bucket preserves its active proxy/endpoint configuration, and a change to those triggers creation of a fresh internally persisted bucket storage access configuration.
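
As a minimal sketch, assuming an S3-compatible setup where the primary target uses the existing S3_STORAGE keys, the secondary target would get corresponding entries with the STORAGE_2 prefix. The exact key names for the secondary target and the endpoint values shown here are assumptions for illustration; see the Bucket Storage documentation for the authoritative list.

  # Primary (preferred) target: a local MinIO endpoint (example values)
  S3_STORAGE_BUCKET=humio-local
  S3_STORAGE_REGION=us-east-1
  S3_STORAGE_ENDPOINT_BASE=http://minio.example.internal:9000

  # Secondary (trusted) target on Amazon S3, assumed STORAGE_2 naming
  S3_STORAGE_2_BUCKET=humio-durable
  S3_STORAGE_2_REGION=us-east-1

  # Apply proxy/endpoint configuration per bucket rather than globally
  BUCKET_STORAGE_MULTIPLE_ENDPOINTS=true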

Improvements, new features and functionality

  • UI Changes

    • Added ability to resize search page query field by dragging or fitting to query.

    • Allow more dialogs in the UI to be closed with the Esc key.

    • New dialogs for creation of parsers and dashboards.

    • Improved accessibility when choosing a theme.

    • New feature to select text in the search page event list and include/exclude that in the search query.

    • Time Selector is now accessible by keyboard.

    • Improved dark mode toggle button's accessibility.

    • Disabled the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than presenting an empty dropdown of repositories to choose from.

    • Hovering over text within a query now shows the result of interpreting escape characters.

  • GraphQL API

    • Improved the error messages when the GraphQL queries SearchDomain.alert, SearchDomain.action, and SearchDomain.savedQuery do not find the entity with the given ID.

  • Configuration

    • Added INITIAL_FEATURE_FLAGS, which lets you enable/disable feature flags on startup. For instance, setting INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage enables UserRoles and disables UsagePage.

    • When using ZOOKEEPER_URL_FOR_NODE_UUID for assignment of node IDs to Humio nodes, a fresh node UUID is now acquired if the value of ZOOKEEPER_PREFIX_FOR_NODE_UUID (default /humio_autouuid) does not match the contents of the local UUID file.

    • Added the new configuration BUCKET_STORAGE_MULTIPLE_ENDPOINTS and a set of configurations using STORAGE_2 as prefix. See Bucket Storage.

    • Reduced the default value of INGESTQUEUE_COMPRESSION_LEVEL (the ingest queue compression level) from 1 to 0. This reduces the time spent compressing before inserting into the ingest queue by roughly 4x, at the expense of a 10-20% increase in the size required in Kafka for the ingest queue topic.

    • Make ZOOKEEPER_URL optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a Zookeeper cluster.

  • Functions

    • Added a limit parameter to the fieldstats() function. This parameter limits the number of fields to include in the result; see the sketch after this list.

  • Other

    • Allow the same view name across organizations.

    • Improved usability of the groups page.

    • Allow the query scheduler to enqueue segments and aux files for download from bucket storage more regularly. This should ensure that queries fetching small aux files can more reliably keep the download job busy.

    • Improved the default permissions view on the groups page by leaving it expanded when the user cancels an update.

    • Improved the error message when an ingest request times out.

    • Added granular IP Filter support for shared dashboards (BETA - API only).

    • Added ability to override max auto shard count for a specific repository.

    • Added exceptions to the Humio logs from AlertJob and ScheduledSearchJob.

    • Added a job that scans segments which are waiting to be archived; the result is recorded in the metric s3-archiving-latency-max.

    • Added warning logs when errors are rendered to the browser during OAuth flows.

    • Improved Humio's detection of Kafka resets. We now load the Kafka cluster id once on boot. If it changes after that, the node will crash.

    • Added a job which periodically runs a query and records how long it took. By default the query is count().

    • Added an option to specify an IP Filter determining for which addresses hostname verification should not be performed.

    • Added analytics on query language feature use to the audit log under the queryParserMetrics fields.
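
    A minimal usage sketch of the new limit parameter on fieldstats(), assuming the usual named-parameter syntax for query functions (the value 10 is just an illustration):

      fieldstats(limit=10)

    This would restrict the output to statistics for at most 10 fields.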

Bug Fixes

  • UI Changes

    • Fixed session() function such that it works when events arrive out of time order.

    • Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.

    • Fixed a compatibility issue with Filebeat 7.16.0.

    • Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes, or is cancelled.

    • Fixed an issue where the digest coordinator could consider a host to be alive if the coordinator hadn't seen any timestamps from that host.

    • Fixed an issue where live queries would sometimes double-count parts of the historic data.

    • Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.

    • Remove script-src: unsafe-eval from content security policy.

    • Removed a spurious warning log when requesting a non-existent hash file from S3.

    • Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • Queries on views no longer restart when the ordering of the view's connections is changed.

    • When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. Ensure that the end offset is requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.

    • Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.

    • Fixed an issue where, if a custom parser was overriding a built-in parser, then the custom parser could accidentally be overwritten by creating a new parser with the same name.

    • The /hec endpoint no longer responds to OPTIONS requests saying it supports GET requests. It doesn't and never has.

    • Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.

    • Reduce noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.

    • The action message templates {events_str} and {query_result_summary} always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack and Webhook actions.

    • Fixed an issue where nodes could request partitions from the query partitioning table that were not present.

    • Make writes to Kafka's chatter topic block in a similar manner to writes to global.

    • Fixed an issue where repeating queries could cause other queries to fail.

    • From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.

    • Fixed an issue in the Table widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.

    • Bumped the Humio Docker containers to Java 17. If you manually set any --add-opens flags in your JVM config, you should remove them. The container should set the right flags automatically.

    • The AlertJob and ScheduledSearchJob now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.

    • SAML and OIDC only - During signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.

    • Fixed an issue where Humio repeatedly tried to restart live queries from a given user after that user was deleted.

    • Query partition table updates are now rejected if written by a node that is no longer the cluster leader.

    • No longer allow requests to /hec to specify organizations by name. We now only accept IDs.

    • Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.

    • Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming, and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.

    • When bootstrapping a new cluster, set the cluster version in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.

    • Fixed Humio always reading and discarding an already processed message from the ingest queue on boot.

    • For HTTP Event Collector (HEC) the input field sourcetype is now also stored in @sourcetype.

    • Fixed an issue where MaxMind databases would only update if a license was present at startup and not if it was added later.

    • Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.

    • Reenable a feature to make Humio delete local copies of bucketed segments, even if they are involved in a query.

    • Code completion in the query editor now also works on the right hand side of :=.

    • No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.

    • The query endpoint API now supports languageVersion for specifying Humio query language versions; see the sketch at the end of this list.

    • Fixed an issue where the SegmentMoverJob could delete the local copy of a segment, if a pending download of the segment failed the CRC check. The job will now keep the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.

    • The repository/.../query endpoint now returns a status code of 400 (BadRequest) when given an invalid query in some cases where previously it returned 503 (ServiceUnavailable).

    • Reenable a feature to make Humio fetch and check hash files from bucket storage before fetching the segments.

    • When creating the ingest and chatter topics, reduce the desired max.message.bytes to what the Kafka cluster allows, if that is lower than our desired values.

    • Make Humio handle missing aux files a little faster when downloading segments from bucket storage.

    • Fixed an issue where top() would fail if the sum of the values exceeded 2^63-1. Sums exceeding this are now pegged to 2^63-1.
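
    A minimal sketch of passing the new languageVersion field to the query endpoint, assuming the standard JSON request body for /api/v1/repositories/<repo>/query. The value "legacy" below is a placeholder, as the release note does not list the accepted version names:

      POST /api/v1/repositories/<repo>/query
      Content-Type: application/json

      {
        "queryString": "count()",
        "start": "1h",
        "end": "now",
        "languageVersion": "legacy"
      }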