Falcon LogScale 1.76.5 LTS (2023-07-04)

Version: 1.76.5
Type: LTS
Release Date: 2023-07-04
Availability: Cloud
End of Support: 2024-02-28
Security Updates: No
Upgrades From: 1.44.0
Config. Changes: No

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.76.5/server-1.76.5.tar.gz

These notes include entries from the following previous releases: 1.76.1, 1.76.2, 1.76.3, 1.76.4

Bug fixes and updates.

Advance Warning

The following items are due to change in a future release.

  • Configuration

    • Starting from the 1.78 release, the default value of the MAX_INGEST_REQUEST_SIZE configuration will be reduced from 1 GB to 32 MB.

      This value limits the size of ingest requests; oversized requests are rejected.

      If the request is compressed within HTTP, the limit applies to the size after decompression. An illustrative configuration snippet follows below.
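
      As a minimal sketch only (assuming the value is given in bytes; check the configuration reference for your version), a cluster that wants to keep today's behaviour across the 1.78 upgrade could pin the current default explicitly:

        # Hypothetical example: pin the current 1 GB default (in bytes) so the
        # 1.78 default change does not alter this cluster's ingest limit.
        MAX_INGEST_REQUEST_SIZE=1073741824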

Removed

Items that have been removed as of this release.

API

  • Removed the API for managing ingest tokens. This has long been deprecated and replaced by a GraphQL API.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The REST endpoint for testing actions, api/v1/repositories/repoId/alertnotifiers/actionId/test, has been deprecated. The new GraphQL mutations should be used instead.

Upgrades

Changes that may occur or be required during an upgrade.

  • Other

    • Java upgraded to 17.0.6 in Docker containers

      Kafka upgraded to 3.3.2 for KAFKA-14379

      Kafka client upgraded to 3.3.2

      Kafka Docker container upgraded to 3.3.2

    • Kafka client has been upgraded to 3.4.0.

      Kafka broker has been upgraded to 3.4.0 in the Kafka container.

      The container upgrade is performed for security reasons to resolve the CVE-2022-36944 issue, which Kafka should, however, not be affected by. If you wish to do a rolling upgrade of your Kafka containers, please always refer to the Kafka upgrade guide.

  • Packages

    • Optimizations in package handling require migration of data during the upgrade. This migration is performed automatically. Please note:

      • While the upgrade of cluster nodes is ongoing, we recommend that you do not install or update any packages, as they may end up in an inconsistent state.

        If a package ends up in a bad state during migration, it can be fixed simply by reinstalling the package.

      • You may find that accessing the list of installed packages fails, and that creating new dashboards, alerts, parsers, etc. based on package templates does not work as intended.

        This should only happen during the cluster upgrade, and should resolve itself once the cluster is fully upgraded.

      • If the cluster nodes are downgraded, any packages installed or updated while running the new version will not work, and we therefore recommend uninstalling or downgrading those packages prior to downgrading the cluster nodes.

New features and improvements

  • Security

    • When creating a new group, you now add the group and assign its permissions in the same multi-step dialog.

  • UI Changes

    • Changes have been made to the three-dot menu (⋮) used for Field Interactions:

      • It is now available from the Fields Panel and the Inspection Panel, see Searching Data.

      • Keyboard navigation has been improved.

      • For field interactions with live queries, the Fields Panel flyout will now display a fixed list of top values, keeping the values from the point in time when the menu was opened.

    • Suggestions in the Query Editor now show for certain function parameters, such as time formats.

    • Introduced Search Interactions to add custom event list options for all users in a repository.

      For more information, see Event List Interactions.

    • Event List Interactions are now sorted by name and repository name by default.

    • Tabs on the Users page have been renamed: the former Groups and Permissions tab is now Permissions, and the former Details tab is now Information. In addition, the Permissions tab is now displayed first and is the tab opened by default when navigating to a user from other places in the product. See Manage users & permissions for a description of roles and permissions in the UI.

    • The Search page now supports timezone picking, e.g. +02:00 Copenhagen. The timezone is set on the user's session and remembered between pages.

      For more information, see Setting Time Zone.

    • You can now set your preferred timezone under Manage your Account.

    • Known field names are now shown as completion suggestions in Query Editor while you type.

  • GraphQL API

    • GraphQL API mutations have been added for testing actions without having to save them first. The added mutations are:

      • testEmailAction

      • testHumioRepoAction

      • testOpsGenieAction

      • testPagerDutyAction

      • testSlackAction

      • testSlackPostMessageAction

      • testUploadFileAction

      • testVictorOpsAction

      • testWebhookAction

      The previous testAction mutation has been removed.

      The signature of the new GraphQL mutations is almost the same as that of the create mutation for the same action type, except that the test mutations require event data and a trigger name, as the previous testAction mutation did.

      As a consequence, the Test button is now always enabled in the UI.
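
      As a rough illustration only (the argument and field names below are assumptions, not the documented schema; consult API Explorer for the actual signatures), invoking one of the test mutations might look roughly like this:

        # Illustrative sketch, not the real schema: all input fields and the
        # selection set are hypothetical placeholders.
        mutation {
          testEmailAction(input: {
            viewName: "my-repo"                 # hypothetical
            name: "Email on alert"              # hypothetical
            recipients: ["ops@example.com"]     # hypothetical
            triggerName: "CPU alert"            # trigger name is required for tests
            eventData: "{\"host\": \"web-1\"}"  # event data is required for tests
          }) {
            __typename
          }
        }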

  • Configuration

    • The ability to keep the same merge target across digest changes is reintroduced. This feature was reverted in an earlier release due to a discovered issue where mini segments for an active merge target could end up spread across hosts. As that issue has been fixed, mini segments should now be stored on the hosts running digest for the target.

    • A new environment configuration variable GLOB_ALLOW_LIST_EMAIL_ACTIONS is introduced. It enables cluster-wide blocking of recipients of Action Type: Email actions that are not in the provided allow list. An illustrative example is shown after this list.

    • New dynamic configuration FlushSegmentsAndGlobalOnShutdown. When set to true, and when USING_EPHEMERAL_DISKS is set to true, all in-progress segments are forced closed and uploaded to the bucket during shutdown, and a global snapshot is also written and uploaded. When not set, that extra work is skipped, so shutdown is faster: very recent segments are not flushed, as they can be resumed on the next boot, assuming the next boot continues on the same Kafka epoch. The default is false, which allows faster shutdown.
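
      As rough illustrations only (separators, value formats, and the enum name used with the GraphQL mutation are assumptions; verify against the configuration reference and your GraphQL schema):

        # Hypothetical allow list: permit email actions only to one domain.
        GLOB_ALLOW_LIST_EMAIL_ACTIONS=*@example.com

        # Sketch of enabling the dynamic configuration at runtime, assuming it
        # is exposed through the setDynamicConfig GraphQL mutation.
        mutation {
          setDynamicConfig(input: {config: FlushSegmentsAndGlobalOnShutdown, value: "true"})
        }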

  • Dashboards and Widgets

    • The Single Value widget now supports interactions on both the Search and Dashboard page. See Manage Dashboard Interactions for more details on interactions.

    • Introduced Dashboards Interactions to add interactive elements to your dashboards.

      For more information, see Manage Dashboard Interactions.

    • It is now possible to set a temporary timezone in dashboards, which will be read from the URL on page load, e.g. tz=Europe/Copenhagen.

      For more information, see Time Interval Settings.

  • Other

    • "Sticky" autoshards no longer mean that the system cannot tune their value, but only that it cannot decrease the number of shards; the cluster is allowed to raise the number of shards on datasources when it needs to, also for those that were set as sticky using the REST API.

    • Ephemeral nodes are automatically removed from the cluster if they are offline for too long (2 hours by default).

Fixed in this release

  • Security

    • Verified that LogScale does not, by default, use the Akka dependency component affected by CVE-2023-31442, and additional precautions have been taken to notify customers.

      For:

      • LogScale Cloud/Falcon Long Term Repository:

        • This CVE does not impact LogScale Cloud or LTR customers.

      • LogScale Self-Hosted:

        • Exposure to risk:

          • Potential risk is only present if a self-hosted customer has modified the Akka parameters to the non-default value akka.io.dns.resolver = async-dns during initial setup.

          • By default LogScale does not use this configuration parameter.

          • CrowdStrike has never recommended custom Akka parameters. We recommend using default values for all parameters.

        • Steps to mitigate:

          • Setting akka.io.dns.resolver to its default value (inet-address) will mitigate the potential risk; an illustrative example is shown after this list.

        • On versions older than 1.92.0:

          • Unset the custom Akka configuration. Refer to the Akka documentation for more information on how to unset the parameter or pass a different value to it.

          • CrowdStrike recommends upgrading LogScale to 1.92.x or higher versions.
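
      As an illustrative example only (where this setting lives depends on how your deployment passes Akka configuration; the value itself is the one named above), the mitigating configuration line is:

        # Akka configuration (HOCON) line forcing the default DNS resolver
        # instead of the async-dns resolver named in the CVE scenario.
        akka.io.dns.resolver = inet-address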

  • UI Changes

    • The Time Selector and date picker in the Time Interval panel have been fixed for issues related to daylight saving time.

    • Fixed an issue where switching the UI theme reported an error and only took effect for the current session.

    • Fixed an issue where the dashboard page would freeze when the value of a dashboard parameter was changed.

    • Fixed the UI not showing an error when a query is blocked due to query quota settings.

    • Fixed tooltips in the query editor, which were hidden by other elements in the UI.

  • Automation and Alerts

    • For self-hosted deployments: sending emails from Actions no longer goes through the IP filter, so administrators do not need to add Automation to the IP allowlist.

  • GraphQL API

    • Pending deletes that would cause nodes to fail to start, reporting a NullPointerException, have been fixed.

  • Storage

    • Fixed mini-segment fetches failing to complete properly during queries when the number of mini-segments involved was too large.

    • Noise from MiniSegmentMergeLatencyLoggerJob has been reduced by being more conservative about when mini segments that are unexpectedly not being merged are logged; the job now takes datasource idleness into account.

  • API

    • Fixed an issue with API Explorer that could fail to load in some configurations when using cookie authentication.

  • Configuration

    • Nodes are now considered ephemeral only if they set USING_EPHEMERAL_DISKS to true. Previously, they were considered ephemeral if they either set that configuration or used the httponly node role.

    • Fixed an issue where the IOC database could get out of sync. The IOC database will be re-downloaded upon upgrade; therefore, IOCs won't be completely available for a while after the upgrade.

    • Removed the compression type extreme for the COMPRESSION_TYPE configuration. Specifying extreme now selects the default value high, so that clusters specifying extreme do not get configuration errors. We suggest removing COMPRESSION_TYPE from your configuration unless you specify fast, the only remaining non-default value. An illustrative example is shown below.
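
      A minimal sketch of the only case where the variable still needs to be stated explicitly (omit it entirely to get the high default):

        # Keep only if you deliberately want the faster, lighter compression.
        COMPRESSION_TYPE=fast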

  • Ingestion

    • We have set a maximum number of events that we will parse under a single timeout, so large batches are allowed to take longer. If you have seen parsers time out not because the parser is actually slow but because many events were processed in a single batch, this change should stop that from happening. Only parsers that are genuinely slow should now time out.

  • Queries

    • Query scheduling has been fixed: it could race with the background recompression of files in a way that caused the query to miss the file and add warnings about segment files being missed by the query.

    • Fixed a failing require in MiniSegmentsAsTargetSegmentReader that caused queries to fail in very rare cases.

  • Functions

    • Queries ending with tail() will no longer be rendered with infinite scroll.

  • Other

    • Fixed an issue for the ingest API that made it possible to ingest into system repositories.

    • Fixed mini-segment downloads during queries, where download retries could spuriously fail even if the download had actually succeeded.

    • Fixed an issue where searching within small subsets of the latest 24 hours in combination with hash filters could result in events that belonged in the time range not being included in the result. The visible symptom was that narrowing the search span provided more hits.

    • Fixed a timeout when publishing to the global topic in Kafka, which resulted in input segments for merge being temporarily marked as broken.