Stable Release

Humio Server 1.51.0 Stable (2022-08-15)


Bug fixes and updates.


These items have been deprecated or removed:

  • Deprecated "enabledFeatures" query. Use new "featureFlags" query instead.

  • The deprecated REST API for parsers has been removed.

  • Remove the following feature flags and their usage: EnterpriseLogin, OidcDynamicIdpProviders, UsagePage, RequestToActivity, CommunityNewDemoData.

  • The deprecated REST API for actions has been removed, except for the endpoint for testing an action.

Improvements, new features and functionality

  • Falcon Data Replicator

    • Added environment variable FDR_USE_PROXY, which makes the FDR job use the proxy settings specified with the HTTP_PROXY_* environment variables.

    • FDR polling is now turned on by default. Whether FDR polling should be turned on or off on a node can be configured using the ENABLE_FDR_POLLING_ON_NODE configuration variable.

    • If an S3 file is found to be incorrectly formatted during FDR ingest, it will not be ingested completely, but an attempt is made to ingest the remaining S3 files of the SQS message.

    • If an S3 file cannot be found during FDR ingest, it will not be ingested, but an attempt is made to ingest the remaining S3 files of the SQS message.
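As a rough illustration of the FDR settings above, a minimal sketch of a node configuration with polling and a proxy enabled. HTTP_PROXY_HOST and HTTP_PROXY_PORT are assumed names for the HTTP_PROXY_* family mentioned in the notes, and all values are placeholders:

```shell
# Sketch only: enabling FDR polling behind a proxy on a Humio node.
# HTTP_PROXY_HOST / HTTP_PROXY_PORT are assumed variable names; values are placeholders.
export HTTP_PROXY_HOST="proxy.example.com"
export HTTP_PROXY_PORT="3128"
export FDR_USE_PROXY="true"               # make the FDR job honor the HTTP_PROXY_* settings
export ENABLE_FDR_POLLING_ON_NODE="true"  # polling is on by default; set "false" to opt out
```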

  • UI Changes

    • If Humio fails to start because the cluster is being upgraded, a dedicated message will show when launching the UI.

    • The design of the Time Selector has been updated, and it now features an Apply button on the dashboard page. See Time Interval Settings.

    • The Live checkbox is no longer checked automatically when changing the value of the time window in the Time Selector. See Expanding Time Frame for details.

    • The toggle switch is now tabbable and can be accessed using the keyboard.

    • New styling of errors on search and dashboard pages.

    • Fixed an issue where, in lists of users with avatars containing user initials, the current user would sometimes appear to have an opening parenthesis as their last initial.

    • When editing an email action in the UI and adding multiple recipients, it is now possible to add a space after the comma in the comma-separated list of recipients.

    • Improved keyboard accessibility for creating repositories and views.

    • Field columns now support multiple formatting options. See Columns Formatting Types for details.

    • Added an icon and a hint to disabled side navigation menu items, telling the user why the item is disabled.

    • The Save As... button is now always displayed on the Search page; it is described at Saving a Search.

    • Add missing accessibility features to the login page.

  • Dashboards and Widgets

    • Note Widget:

      • Default background color is now Auto.

      • Introduced the text color configuration option.

    • The Pie Chart Widget now uses the first column for the series as a fallback option.

    • Applied stylistic changes for the Inspector Panel used in Widget Editor.

    • Dashboards can now be configured to not update after the initial search has completed. This mode is mainly meant to be used when a dashboard is interactive and not for wall-mounted monitors that should update continually. The feature can be accessed from the Dashboard properties panel when a dashboard is put in edit-mode. See Working in Edit Mode.

    • The widget legend column width is now based on the custom series title (if specified) instead of the original series name.

    • Single Value Widget Configuration: deprecated field use-colorised-thresholds in favor of color-method.

      Single Value Widget Editor: the configuration option Enable Thresholds is being replaced by an option called Method under the Colors section.

    • Introducing the Heat Map Widget that visualizes aggregated data as a colorised grid.

    • Bar Chart Widget:

      • The Y-axis can now start at smaller values than 1 for logarithmic scales, when the data contain small enough values.

      • It now has an Auto setting for the Input Data Format property, see Wide or Long Input Format for details.

      • Now works with bucket query results.

    • Single Value Widget:

      • Missing buckets are now shown as gaps on the sparkline.

      • Isolated data points are now visualised as dots on the sparkline.

    • Table widgets will now break lines for newline characters in columns.

    • The Dashboard page now displays the current cluster status.

    • Fixed an issue where importing an existing dashboard with a static shared time would make the dashboard live, due to recent changes in the time selection.

    • Pie Chart Widget categories are now sorted descending by value. Categories grouped as Others will always be last.

    • The Normalize option for the World Map Widget has been replaced by a third magnitude mode named None, which results in fixed size and opacity for all marks.

    • Added empty states for all widget types that will be rendered when there are no results.

    • Better handling of dashboard connection issues during restarts and upgrades.

  • GraphQL API

    • Added a GraphQL mutation for testing an action. It is still in preview, but it will replace the equivalent REST endpoint soon.

    • Expose a new GraphQL type with feature flag descriptions and whether they are experimental.

    • Introduced new dynamic configuration JoinRowLimit. It can be set using graphQL. JoinRowLimit can be used as an alternative to the environment variable MAX_JOIN_LIMIT. If the JoinRowLimit is set, then its value will be used instead of MAX_JOIN_LIMIT. If it is not set, then MAX_JOIN_LIMIT will be used.

    • Introduced new dynamic configuration StateRowLimit. It can be set using graphQL. StateRowLimit can be used as an alternative to the environment variable MAX_STATE_LIMIT. If StateRowLimit is set, then its value will be used instead of MAX_STATE_LIMIT. If it is not set, then MAX_STATE_LIMIT will be used.

    • Added a new dynamic configuration flag QueryResultRowCountLimit that globally limits how many results (events) a query can return. The default value is either 100000 or the value of the flag StateRowLimit, whichever is largest. This flag can be set by administrators through graphQL.

    • Introduced new dynamic configuration GroupMaxLimit. It can be set using graphQL. GroupMaxLimit sets the maximum value of the limit parameter for groupBy(). Previously, this parameter was bounded by MAX_STATE_SIZE, so if you have made any modifications to this variable, please set GroupMaxLimit to the same value for a seamless upgrade. The default value of GroupMaxLimit is 200000.

    • Added a new dynamic configuration GroupDefaultLimit which sets the default value for the 'limit' parameter of groupBy(), selfJoin() and some other functions. Previously the environment variable MAX_STATE_LIMIT was used to determine the default value. This can be done through GraphQL. GroupDefaultLimit has a default value of either 20000 or the value of the config GroupMaxLimit, whichever is smallest. GroupDefaultLimit cannot be larger than GroupMaxLimit. If you've changed the value of MAX_STATE_LIMIT, we recommend that you also change GroupDefaultLimit and GroupMaxLimit to the same value for a seamless upgrade.

    • Introduced new dynamic configuration LiveQueryMemoryLimit. It can be set using graphQL. LiveQueryMemoryLimit determines how much memory in bytes a live query can consume during its execution. For non-live queries, their memory limit is determined by the QueryMemoryLimit, which is 100MB by default. By default LiveQueryMemoryLimit has the same value as QueryMemoryLimit.

    • The GQL API mutation updateDashboard has been updated to take a new argument `updateFrequency`, which can currently only be `NEVER` or `REALTIME`, corresponding respectively to "dashboard where queries are never updated after first completion" and "dashboard where query results are updated indefinitely".

    • Improved error messaging of GraphQL queries and mutations for alerts, scheduled searches and actions in cases where a given repository or view cannot be found.

    • Added preview fields isClusterBeingUpdated and minimumNodeVersion to the GraphQL Cluster object type.

    • Introduced new dynamic configuration QueryMemoryLimit. It can be set using graphQL. QueryMemoryLimit determines how much memory in bytes a static query can consume during its execution. QueryMemoryLimit replaces the environment variable MAX_MEMORY_FOR_REDUCE, so if you have changed the value of MAX_MEMORY_FOR_REDUCE, please use QueryMemoryLimit now instead. QueryMemoryLimit defaults to 100MB. See also LiveQueryMemoryLimit for live queries.
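As a rough illustration of how these dynamic configurations might be set through graphQL, here is a hedged sketch using curl. The setDynamicConfig mutation name and input shape are assumptions, as is the endpoint path; consult your cluster's GraphQL schema before use:

```shell
# Sketch only: setting the JoinRowLimit dynamic configuration via GraphQL.
# The mutation name and input shape are assumptions; verify against your schema.
HUMIO_URL="https://humio.example.com"  # placeholder cluster URL
PAYLOAD='{"query":"mutation { setDynamicConfig(input: { config: JoinRowLimit, value: \"200000\" }) }"}'
echo "$PAYLOAD"
# To actually send it (requires an API token with admin rights in $API_TOKEN):
# curl -s "$HUMIO_URL/graphql" \
#   -H "Authorization: Bearer $API_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```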

  • Documentation

    • All documentation links have been updated following the restructuring of the documentation site. Please contact support if you experience any broken links.

  • Configuration

    • Default value of configuration variable S3_ARCHIVING_WORKERCOUNT raised from 1 to (vCPU/4).

    • Improve the error message if Humio is configured to use bucket storage, but the credentials for the bucket are not configured.

    • New file format for files uploaded to bucket storage that allows files larger than 2GB to be written to bucket storage. This may be turned on by setting the DynamicConfig BucketStorageWriteVersion to "3". When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • New configuration BUCKET_STORAGE_SSE_COMPATIBLE that makes bucket storage not verify checksums of raw objects after uploading to S3. This option is turned on automatically if KMS is enabled (see S3_STORAGE_KMS_KEY_ARN), but is also available directly for use with other S3-compatible providers where even verifying the content length does not work.

    • Mini segments usually get merged if their event timestamps span more than MAX_HOURS_SEGMENT_OPEN. Mini segments created as part of backfilling did not follow this rule, but will now get merged if their ingest timestamps span more than MAX_HOURS_SEGMENT_OPEN.

    • Add environment variable EULA_URL to specify the URL for terms and conditions.

    • Adds a new metric for measuring the merge latency, which is defined as the latency between the last minisegment being written in a sequence with the same merge target, and those minisegments being merged. The metric name is segment-merge-latency-ms.

    • Change default value for configuration AUTOSHARDING_MAX from 16 to 128.

    • Detect need for higher autoshard count by monitoring ingest request flow in the cluster. Dynamically increase the number of autoshards for each datasource to keep flow on each resulting shard below approximately 2MB/s. New DynamicConfig for this that sets the target maximum rate of ingest for each shard of a datasource: TargetMaxRateForDatasource. Default value is 2000000 (2 MB).

    • Added a link to the humio-activity repository, for debugging IDP configurations, to the IDP setup page.

    • Added a new environment variable GLOB_MATCH_LIMIT which sets the maximum number of rows for csv_file in match(..., file=csv_file, glob=true). Previously MAX_STATE_SIZE was used to determine this limit. The default value of this variable is 20000. If you've changed the value of MAX_STATE_SIZE, we recommend that you also change GLOB_MATCH_LIMIT to the same value for a seamless upgrade.

    • Adds a new logger job that logs the age of an unmerged miniSegment if the age exceeds the threshold set by the env variable MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING. The default value of MINI_SEGMENT_MAX_MERGE_DELAY_MS_BEFORE_WARNING is 2 x MAX_HOURS_SEGMENT_OPEN. MAX_HOURS_SEGMENT_OPEN defaults to 24 hours. The error log produced looks like: Oldest unmerged miniSegment is older than the threshold thresholdMs={value} miniSegmentAgeMs={value} segment={value}.

    • Bucket storage now has support for a new format for the keys (file names) of the files placed in the bucket. When the new format is applied, listing of files only happens for the prefixes "tmp/" and "globalsnapshots/". This helps products such as HCP. The new format is applied only to buckets created after the DynamicConfig BucketStorageKeySchemeVersion has been set to "2". Existing clusters can start using the new format for new files by setting the DynamicConfig; the change will take effect after restarting the cluster. When creating a new Humio cluster, the new format is the default. The new format is supported only on Humio version 1.41+.

    • Support for KMS on S3 buckets for Bucket Storage. Specify the full ARN of the key. The key ID is persisted in the internal BucketEntity, so that if the ID of the key to use for uploads is later changed, Humio will still refer to the old key ID when downloading files uploaded using the previous key. Setting a new value for the target key results in a fresh internal bucket entity to track which files used KMS and which did not. For simplicity, it is recommended not to mix KMS and non-KMS configurations on the same S3 bucket.

      • New configuration variable S3_STORAGE_KMS_KEY_ARN that specifies the KMS key to use.

      • New configuration variable S3_STORAGE_2_KMS_KEY_ARN for 2nd bucket key.

      • New configuration variable S3_RECOVER_FROM_KMS_KEY_ARN for recovery bucket key.
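To make the three KMS variables concrete, a minimal sketch of a configuration. The ARNs below are placeholders; supply the full ARN of your own keys, and note the recommendation above not to mix KMS and non-KMS configurations on the same bucket:

```shell
# Sketch only: pointing bucket storage at KMS keys. All ARNs are placeholders.
export S3_STORAGE_KMS_KEY_ARN="arn:aws:kms:eu-west-1:123456789012:key/primary-example"
export S3_STORAGE_2_KMS_KEY_ARN="arn:aws:kms:eu-west-1:123456789012:key/second-bucket-example"
export S3_RECOVER_FROM_KMS_KEY_ARN="arn:aws:kms:eu-west-1:123456789012:key/recovery-example"
```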

  • Log Collector

    • The Log Collector download page has been enabled for on-prem deployments.

  • Functions

    • Added validation to the field and key parameters of the join() function, so empty lists will be rejected with a meaningful error message.

    • Improved the phrasing of the warning shown when groupBy() exceeds the max or default limit.

    • Added validation to the field parameter of the kvParse() function, so empty lists will be rejected with a meaningful error message.

  • Other

    • Fixed a bug where an alert with name longer than 50 characters could not be edited.

    • Added a log message with the maximum state size seen by the live part of live queries.

    • All feature flags now contain a textual description of the features hidden behind the flag.

    • Added a logger job for cluster management stats that logs the stats every 2 minutes, which makes them searchable in Humio.

      The logs belong to the class c.h.c.ClusterManagementStatsLoggerJob; logs covering all segments contain globalSegmentStats, while logs about singular segments start with segmentStats.

    • Users no longer have access to the audit log or the search all view by default. Access can be granted with permissions.

    • Added a log of the approximate query result size before transmission to the frontend, captured by the approximateResultBeforeSerialization key.

    • Add warning when a multitenancy user is changing data retention on an unlimited repository.

    • Added detection and handling of all queries being blocked during Humio upgrades.

    • Improved performance of NDJSON format in S3 Archiving.

    • Removed the remains of default groups and roles. The concept was replaced with UserRoles.

    • Java in the docker images no longer has the cap_net_bind_service capability and thus Humio cannot bind directly to privileged ports when running as a non-root user.

    • Humio now logs digest partition assignments regularly. The logs can be found using the query class=*DigestLeadershipLoggerJob*.

    • Prometheus metrics are now computed by only a single thread at a time. If more requests arrive while a computation is running, the next request gets the previous response.

    • Added a new action type that creates a CSV file from the query result and uploads it to Humio to be used with the match() query function. See Upload File.

    • Added a log line for when a query exceeds its allotted memory quota.

    • When unregistering a node from a cluster, return a validation error if it is still alive. Hosts should be shut down before attempting to remove them from the cluster. This validation can be skipped using the same accept-data-loss parameter that also disables other validations for the unregistration endpoint.

    • Fixed an issue where query auto-completion sometimes wouldn't show the documentation for the suggested functions.

    • Fix a bug that could cause Humio to spuriously log errors warning about segments not being merged for datasources doing backfilling.

    • Fix an unhandled IO exception from TempDirUsageJob. The consequence of the uncaught exception was only noise in the error log.

    • Adds a new metric for the temp disk usage. The metric name is temp-disk-usage-bytes and denotes how many bytes are used.

    • Make BucketStorageUploadJob only log at info level rather than error if a segment upload fails because the segment has been removed from the host. This can happen if node X tries to upload a segment, but node Y beats it to the punch. Node X may then choose to remove its copy before the upload completes.

    • Include the requester in logs from QuerySessions when a live query is restarted or cancelled.

    • Fix a bug causing Humio's digest coordinator to allow nodes to take over digest without catching up to the current leader. This could cause the new leader to replay more data from Kafka than necessary.

    • The audit log system repository on Cloud has been replaced with a view, so that dashboards etc. can be created on top of audit log data.

    • Make a number of improvements to the digest partition coordinator. The coordinator now tries harder to avoid assigning digest to nodes that are not caught up on fetching segments from the other nodes. It also does a better job unassigning digest from dead nodes in edge cases.

    • Bump the version of the Monaco code editor.

    • Added a flag indicating whether a feature is experimental.

    • Streaming queries that fail to validate now return a message of why validation failed.

    • The referrer meta tag for Humio has been changed from no-referrer to same-origin.

Bug Fixes

  • Falcon Data Replicator

    • FDR Ingest will no longer fail on events that are larger than the maximum allowed event size. Instead, such messages will be truncated.

  • UI Changes

    • Prevent the UI from showing errors for minor connection issues while restarting.

    • Intermittent network issues are no longer reported immediately as errors in the UI.

    • Cloud: Updated the layout for license key page.

    • Fix the dropdown menus closing too early on the home page.

    • Websocket connections are now kept open when transitioning pages, and are used more efficiently for syntax highlighting.

    • Fixed a bug where the "=" and "/=" buttons did not appear on cells in the event list where they should.

    • When viewing the events behind e.g. a Time Chart, the events will now only display with the @timestamp and @rawstring columns.

  • Dashboards and Widgets

    • The Time Chart Widget regression line is no longer affected by the interpolation setting.

    • The theme toggle on a shared dashboard was moved to the header panel and no longer overlaps with any widgets.

  • GraphQL API

    • Fix the assets GraphQL query in organizations with views that are not 1-to-1 linked.

  • Configuration

    • Fixed an issue where event forwarding still showed as beta.

    • Fixed a bug that could result in merging small ("undersized") segments even if the resulting segment would then have a wider than desired time span. The goal is to not produce segments that span more than 10% of the time-based retention setting for the repository. If no time-based retention is configured on the repository, then 3 times the value of the configuration variable MAX_HOURS_SEGMENT_OPEN is applied as the limit. For default settings, that results in 72 hours.

    • Fixed an issue where deleting events from a minisegment could result in the merge of those minisegments into the resulting target segment never being executed.

      The index in a block needs to be read from the blockwriter before adding each item.

    • Fixed a bug where the @id field of events in live queries was off by one.

  • Functions

    • Fixed a bug where using eval as an argument to a function would result in a confusing error message.

    • Fixed a bug where ioc:lookup() would sometimes give incorrect results when negated.

    • Revised some of the error messages and warnings regarding join() and selfJoin().

    • Fixed a bug where the writeJson() function would write any field starting with a case-insensitive inf or infinity prefix as a null value in the resulting JSON.

  • Other

    • Fix an issue that could rarely cause exceptions to be thrown from Segments.originalBytesWritten, causing noise in the log.

    • If a view is not found, we will now try to fix up the cache on all cluster nodes.

    • Fixed a typo in the Unregister node text on the cluster admin UI.

    • Fixed an issue where strings like Nana and Information could be interpreted as NaN (not-a-number) and infinity, respectively.

    • Fixed an issue where some warnings would show twice.

    • Fix a bug where changing a role for a user under a repository would trigger infinite network requests.

    • Fix an issue causing the event forwarding feature to incorrectly reject topic names that contained a dash (-).

    • Humio will now clean up its tmp directories by deleting all "humiotmp" directories in the data directory when terminating gracefully.

    • Fix a regression in the launcher script causing JVM_LOG_DIR to not be evaluated relative to the Humio base install path. All paths in the launcher script should now be relative to the base install path, which is the directory containing the bin folder.

    • Fix a bug that could cause merge targets to be cached indefinitely if the associated minis had their mergeTarget unset. The effect was a minor memory leak.

    • Fix a bug that could cause Humio to attempt to merge minisegments from one datasource into a segment in another datasource, causing an error to be thrown.

    • When configuring thread priorities, Humio will no longer attempt to call the native setpriority function. It will instead only call the Java API for setting thread priority.

    • Upgrade Kafka to 3.2.0 in the docker images, and in the Humio dependencies.

    • Update Netty to address CVE-2022-24823.

    • Fix performance issue for users with access to many views.

    • Fixed a bug where multiline comments weren't always highlighted correctly.

    • Fix a bug causing digesters to continue digesting even if the local disk is full. The digester will now pause digesting and log an error if this occurs.

    • Update org.json:json to address a vulnerability that could cause stack overflows.

    • Fix an issue causing Humio to create a large number of temporary directories in the data directory.

    • Centralise decision for names of files in bucket, allow more than one variant.

    • Improved hover messages for strings.

    • Fixed an issue where JSON parsing on ingest and in the query language was inefficient for large JSON objects.

    • Fix a bug that could cause a NumberFormatException to be thrown from ZookeeperStatsClient.

    • Fix response entities not being discarded in error cases for the proxyqueryjobs endpoint, which could cause warnings in the log.

    • Fixed an issue where some error messages wrongly pointed to the beginning of the query.

    • Bump javax.el to address CVE-2021-28170.

    • Bump woodstox to address SNYK-JAVA-COMFASTERXMLWOODSTOX-2928754.

    • Fixed an issue where query auto-completion would sometimes delete existing parentheses.

    • If a segment is deleted or otherwise disappears from global while Humio is attempting to upload it to bucket storage, the upload will now be dropped with an info-level log, rather than requeued with an error log.

    • Fixes the placement of a confirmation dialog when attempting to change retention.

    • Make streaming queries search segments newest-to-oldest rather than oldest-to-newest. Streaming queries do not ensure the order of exported events anyway, and searching newest-to-oldest is more efficient.

    • Fixed an issue where event forwarder properties were not properly validated.

    • Reduced the timeout used when testing event forwarders, in order to get a better error when timeouts happen.

    • Improve file path handling in DiskSpaceJob to eliminate edge cases where the job might not have been able to tell if a file was on primary or secondary storage.