Humio Server 1.36.0 LTS (2022-01-31)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.36.0 | LTS | 2022-01-31 | Cloud | 2023-01-31 | No | 1.26.0 | Yes |
JAR Checksum | Value |
---|---|
MD5 | 87cdf81b183bf1a065596181b873d813 |
SHA1 | 8425487e7e3b566ed2eae42cac4cbb6010eb1825 |
SHA256 | 76af04b53a689411f4048e743e0491cb437b99503630a87cf5aba2eb0281231f |
SHA512 | 9dca89bf4097b1c4949c458b9010fff12edd25248c2f0f2e18835466e852d6c5fa6981c8763125b769c5a6673bd00a1a385bfb22f7241aeeadf9e826ef208c99 |
Docker Image | SHA256 Checksum |
---|---|
humio-core | 2641650964190056ac10ad0225b712c3a01d844bf2c5f517663187d45adf846c |
kafka | 99e3a00c93308aa92a8363c65644748d6ace602c1c6e425dcfc32be12432dee7 |
zookeeper | 45c911346e3b58501e1a1b264c178debd33edd692cd901dd9e87cbcd2f93e60a |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.36.0/server-1.36.0.tar.gz
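The published checksums above can be verified locally before deploying the tarball. A minimal Python sketch (the file path is a placeholder for wherever you saved the download):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA256 hex digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published value, e.g.:
# sha256_of("server-1.36.0.tar.gz") should equal
# "76af04b53a689411f4048e743e0491cb437b99503630a87cf5aba2eb0281231f"
```

The same approach works for the MD5, SHA1, and SHA512 values by swapping the `hashlib` constructor.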
Beta: Bucket storage support for dual targets
Support for dual bucket storage targets allows using one target as the preferred download source and the other as the trusted target for durability. One example is saving on traffic cost by using a local bucket implementation, such as MinIO, in the local datacenter as the preferred bucket storage target, while using a remote Amazon S3 bucket as the trusted bucket for durability. If the local MinIO bucket is lost (or just not responding for a while), the Humio cluster keeps working using the AWS S3 bucket, with no reconfiguration or restart required. The second bucket is configured via entries similar to the existing `STORAGE` keys, but using the prefix `STORAGE_2` for the extra bucket.

When using dual targets, the bucket storage backends may or may not need different proxy configurations. The new configuration `BUCKET_STORAGE_MULTIPLE_ENDPOINTS` (default `false`) controls whether the proxy configuration in the environment is applied to all bucket storage backends. When set to `true`, each bucket preserves the active proxy/endpoint configuration, and a change to those will trigger creation of a fresh internally persisted bucket storage access configuration.
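As a sketch of the dual-target layout described above (the variable names are illustrative only — model them on the `STORAGE` keys your deployment already uses, applying the `STORAGE_2` prefix rule for the second bucket):

```shell
# Preferred (local) bucket: existing STORAGE keys, pointing at MinIO.
export S3_STORAGE_BUCKET=humio-local
export S3_STORAGE_ENDPOINT_BASE=https://minio.internal:9000

# Trusted (durable) bucket: same keys with the STORAGE_2 prefix, pointing at S3.
export S3_STORAGE_2_BUCKET=humio-durable
export S3_STORAGE_2_REGION=us-east-1

# Apply proxy/endpoint configuration per backend rather than globally.
export BUCKET_STORAGE_MULTIPLE_ENDPOINTS=true
```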
New features and improvements
UI Changes
- New feature to select text in the search page event list and include or exclude it in the search query.
- Improved the accessibility of the dark mode toggle button.
- Disabled the option to create a view if the user does not have the Connect a view permission on any repository. This is more intuitive than getting an empty dropdown of repositories to choose from.
- Improved accessibility when choosing a theme.
- Allow more dialogs in the UI to be closed with the `Esc` key.
- Added the ability to resize the search page query field by dragging or fitting to the query.
- The Time Selector is now accessible by keyboard.
- Hovering over text within a query now shows the result of interpreting escape characters.
- New dialogs for creating parsers and dashboards.
GraphQL API
- Improved the error messages when the GraphQL queries `SearchDomain.alert`, `SearchDomain.action`, and `SearchDomain.savedQuery` do not find the entity with the given ID.
Configuration
- Added the config `CORS_ALLOWED_ORIGINS`, a comma-separated list of allowed CORS origins. The default allows all origins.
- Added `INITIAL_FEATURE_FLAGS`, which lets you enable or disable feature flags on startup. For instance, setting `INITIAL_FEATURE_FLAGS=+UserRoles,-UsagePage` enables `UserRoles` and disables `UsagePage`.
- New configuration `BUCKET_STORAGE_MULTIPLE_ENDPOINTS` and many configurations using `STORAGE_2` as prefix. See Bucket Storage.
- Made `ZOOKEEPER_URL` optional. When not set, the zookeeper-status-logger job does not run, and the cluster administration page does not display information about a ZooKeeper cluster.
- When using `ZOOKEEPER_URL_FOR_NODE_UUID` for assignment of node IDs to Humio nodes, and the value of `ZOOKEEPER_PREFIX_FOR_NODE_UUID` (default `/humio_autouuid`) does not match the contents of the local UUID file, acquire a fresh node `uuid`.
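The `+`/`-` syntax used by `INITIAL_FEATURE_FLAGS` can be read as a pair of enable/disable sets. A small illustrative parser (not Humio's actual implementation):

```python
def parse_feature_flags(value: str) -> tuple:
    """Split an INITIAL_FEATURE_FLAGS-style string into (enabled, disabled) sets."""
    enabled, disabled = set(), set()
    for item in value.split(","):
        item = item.strip()
        if item.startswith("+"):
            enabled.add(item[1:])   # +Flag enables the feature
        elif item.startswith("-"):
            disabled.add(item[1:])  # -Flag disables the feature
    return enabled, disabled

# parse_feature_flags("+UserRoles,-UsagePage")
# → ({"UserRoles"}, {"UsagePage"})
```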
Functions
- Added a job which periodically runs a query and records how long it took. By default the query is `count()`.
- Added a `limit` parameter to the `fieldstats()` function. This parameter limits the number of fields to include in the result.
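To illustrate the effect of the new `limit` parameter, here is a rough Python sketch of field statistics over events modeled as dicts. Which fields survive the cut when limiting is an assumption here (most frequent first); the sketch only shows the shape of the behavior, not Humio's implementation:

```python
from collections import Counter

def field_stats(events: list, limit=None) -> dict:
    """Count, per field name, how many events carry that field.

    At most `limit` fields are included in the result
    (here, the most frequent ones; ties keep insertion order).
    """
    counts = Counter()
    for event in events:
        counts.update(event.keys())
    return dict(counts.most_common(limit))  # most_common(None) returns all

events = [{"host": "a", "status": 200}, {"host": "b"}, {"status": 500}]
# field_stats(events)          → {"host": 2, "status": 2}
# field_stats(events, limit=1) → a single field only
```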
Other
- Added an option to specify an IP filter for addresses that are exempt from hostname verification.
- Added granular IP filter support for shared dashboards (BETA, API only).
- Added analytics on query language feature use to the audit log under the field `queryParserMetrics`.
- Allow the query scheduler to enqueue segments and `aux` files for download from bucket storage more regularly. This should ensure that queries fetching small `aux` files can more reliably keep the download job busy.
- Removed caching of API calls to prevent caching of potentially sensitive data.
- Added exceptions to the Humio logs from `AlertJob` and `ScheduledSearchJob`.
- Added the ability to override the max auto shard count for a specific repository.
- Improved the default permissions on the group page by leaving their view expanded once the user cancels an update.
- Allow the same view name across organizations.
- Improved caching of UI static assets.
- Improved the error message when an ingest request times out.
- Added warning logs when errors are rendered to the browser during OAuth flows.
- Added a job that scans segments waiting to be archived; this value is recorded in the metric `s3-archiving-latency-max`.
- Improved Humio's detection of Kafka resets. The Kafka cluster ID is now loaded once on boot; if it changes after that, the node will crash.
- Improved usability of the groups page.
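The Kafka reset detection described above amounts to remembering the first cluster ID seen and refusing to continue if it ever changes. A sketch of that guard (names are illustrative, and the sketch raises where a Humio node would crash):

```python
class KafkaClusterGuard:
    """Remember the Kafka cluster id observed at boot; fail fast if it changes."""

    def __init__(self) -> None:
        self._boot_cluster_id = None

    def check(self, current_cluster_id: str) -> None:
        if self._boot_cluster_id is None:
            # First observation, taken once on boot.
            self._boot_cluster_id = current_cluster_id
        elif current_cluster_id != self._boot_cluster_id:
            # A changed cluster id means Kafka was reset or replaced.
            raise RuntimeError("Kafka cluster id changed since boot")
```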
Fixed in this release
UI Changes
- For the HTTP Event Collector (HEC), the input field `sourcetype` is now also stored in `@sourcetype`.
- Removed `script-src: unsafe-eval` from the content security policy.
- Removed a spurious warning log when requesting a non-existent `hash` file from S3.
- The action message templates `{events_str}` and `{query_result_summary}` always evaluate to the same string. To reflect this, the UI has been updated so that these templates are combined into the same item in the template overview for Email, Slack, and Webhook actions.
- Fixed an issue where the `SegmentMoverJob` could delete the local copy of a segment if a pending download of the segment failed the CRC check. The job now keeps the downloaded file at a temporary path until the CRC check completes, to avoid deleting a local copy created by other jobs, e.g. by bucket downloads.
- The query endpoint API now supports `languageVersion` for specifying Humio query language versions.
- Fixed a compatibility issue with Filebeat 7.16.0.
- Made writes to Kafka's chatter topic block in a similar manner to writes to global.
- Fixed an issue where `top` would fail if the sum of the values exceeded 2^63-1. Exceeding sums are now pegged to 2^63-1.
- When bootstrapping a new cluster, the cluster version is now set in global right away. Since nodes will not boot on a snapshot that doesn't specify a cluster version, it is important that this field exists in all snapshots.
- Re-enabled a feature that makes Humio delete local copies of bucketed segments, even if they are involved in a query.
- Fixed an issue where repeating queries could cause other queries to fail.
- Fixed an issue in the `Table` widget. It will no longer insert 0-values for missing fields in integer columns. Empty fields will be shown consistently, independent of the column data type.
- The `/hec` endpoint no longer responds to `OPTIONS` requests saying it supports `GET` requests. It doesn't and never has.
- Fixed an issue where choosing a UI theme would not get saved properly in the user's settings.
- Made Humio handle missing `aux` files a little faster when downloading segments from bucket storage.
- Fixed a race condition that could cause Humio to delete more segments than expected when initializing a digester node.
- Fixed an issue in the Export to file dialog on the search page. It is now possible to export fields with spaces.
- The `repository/.../query` endpoint now returns a status code of `400 (BadRequest)` when given an invalid query in some cases where previously it returned `503 (ServiceUnavailable)`.
- Fixed an issue where the Humio query URLs sent by actions would land users on the search page in editing mode for the alert or scheduled search that had triggered. Now, they still land on the search page, but not in editing mode.
- Fixed a race condition that could cause digesters to calculate two different offsets during startup when determining where to start consuming and which partially written segments to discard, which could lead to data loss when partially written segments were replayed from Kafka.
- Queries on views no longer restart when the ordering of the view's connections is changed.
- Code completion in the query editor now also works on the right-hand side of `:=`.
- Fixed an issue where MaxMind databases would only update if a license was present at startup, and not if it was added later.
- Fixed `session()` so that it works when events arrive out of time order.
- Fixed an issue that repeatedly tried to restart live queries from a given user upon the deletion of that user.
- Fixed an issue where live queries would sometimes double-count parts of the historic data.
- When interacting with the REST API for files, errors now have detailed error messages.
- Fixed an issue where, if a custom parser was overriding a built-in parser, the custom parser could accidentally be overwritten by creating a new parser with the same name.
- From the alerts overview and the scheduled searches overview, it is now possible to clear the error status on an alert or a scheduled search.
- Errors on alerts are now cleared more granularly. Errors when starting the alert query are cleared as soon as the query is successfully started, errors from polling the query are cleared when the query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
- Reduced noise in the log when the bucket storage upload job attempts to upload a file that is deleted concurrently.
- Errors on scheduled searches are now cleared more granularly. Errors when starting a query are cleared as soon as another query is successfully started, errors from polling a query are cleared when a query is successfully polled, and errors from invoking actions are cleared when at least one action has been successfully triggered.
- No longer allow organization- and system-level ingest tokens to ingest into sandbox and system repos.
- Re-enabled a feature that makes Humio fetch and check `hash` files from bucket storage before fetching the segments.
- No longer allow requests to `/hec` to specify organizations by name. Only IDs are now accepted.
- SAML and OIDC only: during signout, Humio background tabs will be redirected to a signout landing page instead of to the login page.
- Humio now tries to avoid interrupting threads during shutdown, instead allowing them to finish their work. This should reduce log noise when shutting down.
- The `AlertJob` and `ScheduledSearchJob` now only log validation errors from running the queries as warnings; previously, some of these were logged as errors.
- Fixed an issue where nodes could request partitions from the query partitioning table that were not present.
- Fixed an issue where queries of the form `#someTagField != someValue ...` would sometimes produce incorrect results.
- When starting ingest, Humio checks that the computed starting position in Kafka is below the Kafka end offset. The end offset is now requested after the starting position is computed, not before. This might prevent a very rare spurious boot failure.
- Fixed Humio always reading and discarding an already-processed message from the ingest queue on boot.
- Fixed a number of instability issues in the query scheduler. The scheduler should now more reliably ensure that each query either completes or is cancelled.
- Bumped the Humio Docker containers to Java 17. If you manually set any `--add-opens` flags in your JVM config, you should remove them; the container sets the right flags automatically.
- Fixed an issue where the digest coordinator could consider a host alive even if the coordinator hadn't seen any timestamps from that host.
- When creating the ingest and chatter topics, reduce the desired `max.message.bytes` to what the Kafka cluster allows, if that is lower than our desired values.
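The `top` overflow fix in the list above amounts to saturating 64-bit arithmetic: instead of failing when a sum exceeds 2^63-1, the result is pegged at that maximum. A sketch of the idea:

```python
MAX_LONG = 2**63 - 1  # largest signed 64-bit value

def saturating_sum(values) -> int:
    """Sum values, pegging the result at 2^63-1 instead of overflowing."""
    total = 0
    for v in values:
        total += v
        if total >= MAX_LONG:
            return MAX_LONG  # saturate rather than fail
    return total
```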
Queries
- Query partition table updates are now rejected if written by a node that is no longer the cluster leader.