Falcon LogScale 1.145.0 Preview (2024-07-02)

Version: 1.145.0
Type: Preview
Release Date: 2024-07-02
Availability: Cloud
End of Support: Next Stable
Security Updates: Yes
Upgrades From: 1.112
JDK Compatibility: 21-22
Config. Changes: No

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • Calling the match() function with multiple columns now finds the last matching row in the file. This now aligns with the behavior of calling the same function with a single column.

      For more information, see match().
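
      As a sketch, a multi-column lookup now behaves like the single-column form. The file, field, and column names below are hypothetical:

      ```logscale
      // Hypothetical lookup file endpoints.csv with key columns "host" and "port".
      // If several rows match, the LAST matching row now provides the output
      // fields, aligning with the existing single-column behavior.
      match(file="endpoints.csv", field=[host, port], column=[host, port])
      ```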

Advance Warning

The following items are due to change in a future release.

  • Installation and Deployment

    • The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly.

      This change is scheduled for 1.148.0.

      For more information, see Configuring Available CPU Cores.
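
      As an illustrative sketch of the migration (the core count 8 is a placeholder value):

      ```shell
      # Before (will be ignored once the change lands in 1.148.0):
      #   -XX:ActiveProcessorCount=8 passed to the JVM
      # After: set CORES instead; the launcher then configures both
      # LogScale and the JVM from this single value.
      export CORES=8
      ```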

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The following API endpoints are deprecated and marked for removal in 1.148.0:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    The deprecated methods are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka install.
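
    As an illustrative replacement workflow (the broker address and topic name below are hypothetical), the stock Kafka tooling covers both viewing and changing assignments:

    ```shell
    # Inspect the current partition assignment for the ingest queue topic:
    bin/kafka-topics.sh --bootstrap-server localhost:9092 \
      --describe --topic humio-ingest

    # Apply a new assignment described in a JSON file, then verify it:
    bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
      --reassignment-json-file reassignment.json --execute
    bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
      --reassignment-json-file reassignment.json --verify
    ```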

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0.

    The variable existed to ease migration from older deployments where the launcher script was not available. The launcher script removes the need to set these parameters manually, so the variable is no longer required, and using the launcher script is now the recommended way to start LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the variables described at Override garbage collection configuration within the launcher script.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: STRIMZI

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
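
    A minimal sketch of querying the replacement fields; the repository name, and the exact query shape, are assumptions here:

    ```graphql
    query {
      searchDomain(name: "my-repo") {
        scheduledSearches {
          id
          name
          lastExecuted   # replaces lastScheduledSearch
          lastTriggered
        }
      }
    }
    ```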

New features and improvements

  • UI Changes

    • When a file is referenced in a query, the Search page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab fetches the contents of the file and shows them as a Table widget. If the file cannot be queried, a download link is presented instead.

      For more information, see Creating a File.

    • When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.

      For more information, see Exporting Data.

  • GraphQL API

    • The new startFromDateTime argument has been added to the s3ConfigureArchiving GraphQL mutation. When set, S3 archiving does not consider segment files whose start time is before this point in time. In particular, this allows enabling S3 archiving from a given point in time onwards, without also archiving all older files.
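
      A sketch of the mutation with the new argument; all argument values, any arguments other than startFromDateTime, and the selection set are illustrative and may differ from the actual schema:

      ```graphql
      mutation {
        s3ConfigureArchiving(
          repositoryName: "my-repo"
          bucket: "my-archive-bucket"
          region: "us-east-1"
          format: NDJSON
          startFromDateTime: "2024-07-01T00:00:00Z"  # segments starting earlier are skipped
        ) {
          __typename
        }
      }
      ```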

  • Configuration

    • A new dynamic configuration variable GraphQlDirectivesAmountLimit has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25.
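
      Dynamic configuration values of this kind are typically set through the GraphQL API; as a hedged sketch (the mutation name and input shape are assumptions):

      ```graphql
      mutation {
        updateDynamicConfig(input: {
          config: GraphQlDirectivesAmountLimit
          value: "50"   # valid values are integers from 5 to 1,000; default 25
        })
      }
      ```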

  • Functions

    • The new query function text:contains() is introduced. The function tests if a specific substring is present within a given string. It takes two arguments: string and substring, both of which can be provided as plain text, field values, or results of an expression.

      For more information, see text:contains().
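
      For example (the field name below is hypothetical), filtering events whose message field contains a given substring:

      ```logscale
      // Keep only events where the "message" field contains "timeout".
      // Both arguments can also be fields or results of an expression.
      text:contains(string=message, substring="timeout")
      ```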

    • The new query function array:append() is introduced, used to append one or more values to an existing array, or to create a new array.

      For more information, see array:append().
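
      A short sketch (the array and field names are hypothetical):

      ```logscale
      // Append a literal and a field value to the "tags[]" array,
      // creating the array if it does not already exist.
      array:append(array="tags[]", values=["prod", hostname])
      ```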

Fixed in this release

  • UI Changes

    • A long list of large queries could prevent the queries' list under the Recent tab from updating. The number of recent queries is now limited to 30.

      For more information, see Recalling Queries.

    • It was not possible to sort by columns other than ID in the Cluster nodes table under the Operations UI menu. This issue has now been fixed.

    • The dialog for quickly switching to another repository would open when pressing the undo hotkey on Windows machines. This issue has now been fixed.

  • Automation and Alerts

    • The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.

    • Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.

  • Storage

    • Notifying the Global Database about file changes could be slow. This issue has now been fixed.

  • GraphQL API

    • The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.

  • Dashboards and Widgets

    • When invoking a saved query, arguments could still be submitted for parameters that were no longer used after the underlying query was deleted, generating an error. This issue has now been fixed.

  • Ingestion

    • Parsers could output events in the wrong order. This issue has now been fixed, and the output now returns events in the correct order.

Improvement

  • Storage

    • The global topic throughput has been improved for particular updates to segments in datasources with many segments.

      For more information, see Global Database.

    • The segment merge span now varies by +/- 10% of the configured value, to avoid all segments switching to new merge targets at the same point in time.