Falcon LogScale 1.76.1 Stable (2023-02-27)

Version: 1.76.1
Type: Stable
Release Date: 2023-02-27
Availability: Cloud, On-Prem
End of Support: 2024-02-28
Security Updates: No
Upgrades From: 1.44.0
JDK Compatibility: 11
Req. Data Migration: No
Config. Changes: No
TAR Checksums:

MD5: 5c03162eebeb9c4fe028bce4140da4d9
SHA1: 56459772d2c7f5c2d21be6650d473bfee0893ab1
SHA256: 04c067f721cb6a3bf3e74ce10d2bda8a12a3ede05c6b181af8a074a430321bfc
SHA512: 71e330f43a0825c70bf0fd8b1c3c82cedc554aab87316a906a72c813697c930be58eef2bffe049ff28f13bd0d4b44700e8fd7a54e796724acdf4c063c5c4508c
Docker Image SHA256 Checksums:

humio: e4a730e769cb84cea8be642eb352763e6596caa249a95857de9052cc4b83ddb4
humio-core: 148d662610e09163ce581487ebdec4519960e9f332473b100cd3c6466d52943b
kafka: c717b3b0c5087cb746bde5381419bf5cc31532a1756f2463ff31477374b89a4a
zookeeper: 2b228e05f97e8946c323fa40060102de41210e5e38733ffb3dd0b353259c37d3

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.1/server-1.76.1.tar.gz

Bug fixes and updates.

Advanced Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from 1.78 release, the default value for the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of ingest requests; oversized requests are rejected.

      If the request is compressed within HTTP, then this restricts the size after decompressing.
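Because the limit applies after decompression, a gzip-compressed request that is small on the wire can still be rejected. The following is a minimal illustrative sketch of that check, not LogScale code; only the upcoming 32 MB default comes from this notice:

```python
import gzip

MAX_INGEST_REQUEST_SIZE = 32 * 1024 * 1024  # upcoming 1.78 default: 32 MB

def within_ingest_limit(body: bytes, content_encoding: str = "") -> bool:
    """Return True if the request body, after decompression, fits the limit."""
    if content_encoding == "gzip":
        body = gzip.decompress(body)
    return len(body) <= MAX_INGEST_REQUEST_SIZE

# A highly repetitive payload compresses well, but the decompressed size is
# what counts against the limit.
payload = b'{"event":"x"}' * 4_000_000          # ~52 MB decompressed
compressed = gzip.compress(payload)             # far smaller on the wire
assert not within_ingest_limit(compressed, "gzip")
assert within_ingest_limit(b'{"event":"x"}')
```

Clients sending large compressed batches should therefore check their decompressed payload size before the new default takes effect.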

Removed

Items that have been removed as of this release.

API

  • Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Java upgraded to 17.0.6 in Docker containers.

    • Kafka upgraded to 3.3.2 for KAFKA-14379.

    • Kafka client upgraded to 3.3.2.

    • Kafka Docker container upgraded to 3.3.2.

  • Packages

    • Optimizations in package handling require migration of data during upgrade. This migration is performed automatically. Please note the following:

      • While the upgrade of cluster nodes is ongoing, we recommend that you do not install or update any packages, as they may end up in an inconsistent state.

        If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.

      • Accessing the list of installed packages may fail, and creating new dashboards, alerts, parsers, etc. from package templates may not work as intended.

        This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.

      • If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.

Improvements, new features and functionality

  • UI Changes

    • Changes have been made to the three-dot menu (⋮) used for Field Interactions:

      • It is now available from the Fields Panel and the Inspection Panel, see Searching Data.

      • Keyboard navigation has been improved.

      • For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.

    • Introduced Search Interactions to add custom event list options for all users in a repository.

      For more information, see Event List Interactions.

    • Event List Interactions are now sorted by name and repository name by default.

    • Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions; the former Details tab is now Information. In addition, the Permissions tab is now displayed first and is the tab opened by default when navigating to a user from other places in the product. See Managing Users & Permissions for a description of roles and permissions in the UI.

    • The Search page now supports timezone picking, e.g. +02:00 Copenhagen. The timezone is set on the user's session and remembered between pages.

      For more information, see Setting Time Zone.

    • You can now set your preferred timezone under Manage your Account.

    • Suggestions in the Search Box now show for certain function parameters, such as time formats.

    • Known field names are now shown as completion suggestions in Search Box while you type.

  • Automation and Alerts

  • GraphQL API

    • GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:

      • testEmailAction

      • testHumioRepoAction

      • testOpsGenieAction

      • testPagerDutyAction

      • testSlackAction

      • testSlackPostMessageAction

      • testUploadFileAction

      • testVictorOpsAction

      • testWebhookAction

      The previous testAction mutation has been removed.

      The new GraphQL mutations have almost the same signature as the create mutation for the corresponding action, except that the test mutations require event data and a trigger name, as the previous testAction mutation did.

      As a consequence, the Test button is now always enabled in the UI.

  • Configuration

    • The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.

    • A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list.
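The allow-list semantics can be illustrated with glob matching against each recipient. The sketch below is hypothetical: the variable name comes from these notes, but the comma-separated pattern format and matching rules are assumptions, not documented syntax.

```python
from fnmatch import fnmatch

def recipient_allowed(recipient: str, allow_list: str) -> bool:
    """Illustrative glob allow-list check for email action recipients.
    The comma-separated format is an assumption, not the documented syntax."""
    patterns = [p.strip() for p in allow_list.split(",") if p.strip()]
    return any(fnmatch(recipient.lower(), p.lower()) for p in patterns)

allow = "*@example.com, ops-*@corp.example.org"
assert recipient_allowed("alice@example.com", allow)
assert not recipient_allowed("mallory@evil.test", allow)
```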

    • New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set, and when USING_EPHEMERAL_DISKS is true, it forces all in-progress segments to be closed and uploaded to the bucket, and also forces a write (and upload) of the global snapshot during shutdown. When not set, this extra work from flushing very recent segments is skipped, since those segments can be resumed on the next boot, assuming the next boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.

  • Dashboards and Widgets

    • It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load e.g. tz=Europe%2FCopenhagen.

      For more information, see Time Interval Settings.
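As a small illustration of how such a URL-encoded parameter is read, the sketch below parses tz=Europe%2FCopenhagen out of a dashboard URL. This shows typical query-string handling, not LogScale's own code:

```python
from typing import Optional
from urllib.parse import parse_qs, urlsplit

def timezone_from_url(url: str) -> Optional[str]:
    """Read the tz query parameter that a dashboard picks up on page load."""
    params = parse_qs(urlsplit(url).query)  # parse_qs URL-decodes values
    return params["tz"][0] if "tz" in params else None

url = "https://logscale.example/dashboards/ops?tz=Europe%2FCopenhagen"
assert timezone_from_url(url) == "Europe/Copenhagen"
assert timezone_from_url("https://logscale.example/dashboards/ops") is None
```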

    • The Single Value widget now supports interactions on both the Search and Dashboard page. See Managing Dashboard Interactions for more details on interactions.

    • Introduced Dashboards Interactions to add interactive elements to your dashboards.

      For more information, see Managing Dashboard Interactions.

  • Log Collector

  • Functions

  • Other

    • "Sticky" autoshards no longer prevent the system from tuning their value; they only prevent it from decreasing the number of shards. The cluster is allowed to raise the number of shards on datasources when it needs to, including those set as sticky using the REST API.
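The new stickiness rule can be summarized as "increases pass through, decreases are clamped". A toy sketch of that rule (illustrative only, not LogScale's scheduler code):

```python
def tune_shards(current: int, proposed: int, sticky: bool) -> int:
    """Sticky datasources may gain shards but never lose them."""
    if sticky and proposed < current:
        return current  # decreasing is blocked for sticky datasources
    return proposed

assert tune_shards(4, 8, sticky=True) == 8    # raising is allowed even when sticky
assert tune_shards(4, 2, sticky=True) == 4    # decreasing is blocked
assert tune_shards(4, 2, sticky=False) == 2   # non-sticky tunes freely
```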

    • When creating a new group, you now add the group and its permissions in the same multi-step dialog.

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).
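The eviction rule combines two conditions: the node must be ephemeral and offline past the threshold. A minimal sketch of that decision, with only the 2-hour default taken from the release note:

```python
import time
from typing import Optional

OFFLINE_EVICTION_SECS = 2 * 60 * 60  # 2-hour default from the release note

def should_evict(ephemeral: bool, last_seen_epoch: float,
                 now: Optional[float] = None) -> bool:
    """Illustrative eviction rule: only ephemeral nodes offline too long
    are removed from the cluster."""
    now = time.time() if now is None else now
    return ephemeral and (now - last_seen_epoch) > OFFLINE_EVICTION_SECS

now = 1_000_000.0
assert should_evict(True, now - 3 * 60 * 60, now)       # offline 3h: evicted
assert not should_evict(True, now - 30 * 60, now)       # offline 30m: kept
assert not should_evict(False, now - 3 * 60 * 60, now)  # non-ephemeral: kept
```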

Bug Fixes

  • UI Changes

    • Fixed an issue that made switching UI theme report an error and only take effect for the current session.

    • Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.

    • Fixed an issue where the UI was not showing an error when a query gets blocked due to query quota settings.

    • We have fixed tooltips in the query editor, which were hidden by other elements in the UI.

  • Automation and Alerts

  • GraphQL API

    • Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.

  • Configuration

    • Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade, therefore IOCs won't be completely available for a while after the upgrade.

    • Removed the compression type extreme for the COMPRESSION_TYPE configuration. Specifying extreme now selects the default value of high, so clusters that still specify extreme do not fail with configuration errors. We suggest removing COMPRESSION_TYPE from your configuration unless you use the only other non-default value, fast.
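The fallback behavior is simple: extreme maps to the default high, the two remaining values pass through. A sketch of that mapping (illustrative, not LogScale's configuration parser):

```python
def effective_compression_type(configured: str) -> str:
    """extreme has been removed; it now falls back to the default high."""
    if configured == "extreme":
        return "high"
    if configured not in {"fast", "high"}:
        raise ValueError(f"unknown COMPRESSION_TYPE: {configured}")
    return configured

assert effective_compression_type("extreme") == "high"
assert effective_compression_type("fast") == "fast"
```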

  • Functions

    • Queries ending with tail() will no longer be rendered with infinite scroll.

  • Other

    • Fixed a failing require from MiniSegmentsAsTargetSegmentReader that caused queries to fail in very rare cases.

    • Fixed an issue for the ingest API that made it possible to ingest into system repositories.

    • Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were ephemeral if they either set that configuration, or if they were using the httponly node role.

    • Fixed minisegment downloads during queries, where download retries could fail spuriously even if the download actually succeeded.

    • We have set a maximum number of events parsed under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is slow, but because it was processing many events in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.
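The effect of that fix is that the timeout budget scales with batch size: each bounded chunk of events gets its own slice of time. A toy sketch of the idea, with all constants hypothetical:

```python
def batch_timeout_ms(n_events: int,
                     per_chunk_timeout_ms: int = 10_000,
                     max_events_per_timeout: int = 1_000) -> int:
    """Illustrative: the timeout covers at most max_events_per_timeout events,
    so larger batches get proportionally more time. Both defaults here are
    made up for illustration."""
    chunks = -(-n_events // max_events_per_timeout)  # ceiling division
    return max(chunks, 1) * per_chunk_timeout_ms

assert batch_timeout_ms(500) == 10_000      # one chunk, one timeout slice
assert batch_timeout_ms(5_000) == 50_000    # five chunks, five slices
```

Under this scheme a fast parser fed a huge batch no longer exhausts a single fixed timeout, while a genuinely slow parser still fails within its chunk.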

    • Fixed an issue where the query scheduling could hit races with the background recompression of files in a way that resulted in the query missing the file and ending up adding warnings about segment files being missed by the query.

    • Fixed a timeout when publishing to the global topic in Kafka, which resulted in input segments for merge being temporarily marked as broken.

    • Fixed minisegment fetches that failed to complete properly during queries when the number of minisegments involved was too large.

    • We have reduced the noise from MiniSegmentMergeLatencyLoggerJob by being more conservative about when we log mini segments that are unexpectedly not being merged; the job now also takes datasource idleness into account.