Falcon LogScale 1.153.1 LTS (2024-09-18)

Version            1.153.1
Type               LTS
Release Date       2024-09-18
Availability       Cloud
End of Support     2025-09-30
Security Updates   Yes
Upgrades From      1.112
Config. Changes    No

TAR Checksum   Value
MD5            5780ffe21c92f8fa122d7eeb30136cb2
SHA1           38ae38917c6fbb3a5c9820e7f1dbc97a687c876c
SHA256         a28139539a3a9ee3c851fdeeed88167d2707d70a621813594ac0502cfc609a04
SHA512         db367ed6483b118c34ebfc6c732e809e54a352266c8b1a8a23dc5cee41043048ecc3041d32b4f09c2987f5f702e49a3d512dcf2f16350c0adac6a4390845c1c5

Docker Image   Included JDK   SHA256 Checksum
humio          22             38801e6d339cfc288ccf58fb694e9e0e4882763773393e6c5940501f5c9987dc
humio-core     22             4b3a9fbe1d7de1e0e1048a73e82191984b74e33ac023dda3bec0ec5418b76a1a
kafka          22             ffdb1580b5f5d17746757f8f8ff3f18d2286713a7d13da8ac21ed576677be826
zookeeper      22             4126d016a2c432cb76278ee0e7368d93df7a9304cad08d19fa5ae3334872fc0a

Download

Bug fixes and updates.

Breaking Changes

The following items create a breaking change in the behavior, response or operation of this release.

  • Functions

    • Calling the match() function with multiple columns now finds the last matching row in the file, aligning with the behavior of calling the function with a single column.

      For more information, see match().
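
      The following is a minimal sketch of the new behavior; the lookup file name, its contents, and the event fields are hypothetical:

        // lookup.csv (assumed contents):
        //   user,role,title
        //   alice,admin,Site admin
        //   alice,admin,Global admin
        | match(file="lookup.csv", field=[user, role], column=[user, role])
        // Both rows match on (alice, admin); title is now taken from the last
        // matching row ("Global admin"), as when matching on a single column.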

Removed

Items that have been removed as of this release.

Installation and Deployment

  • The previously deprecated jar distribution of LogScale (e.g. server-1.117.jar) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).

  • The previously deprecated humio/kafka and humio/zookeeper Docker images are now removed and no longer published.

API

  • The following previously deprecated Kafka API endpoints have been removed:

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment

    • POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults

    • GET /api/v1/clusterconfig/kafka-queues/partition-assignment/id

Configuration

Other

  • The unnecessary digest-coordinator-changes and desired-digest-coordinator-changes metrics have been removed. Instead, logging in the IngestPartitionCoordinator class has been improved to allow monitoring of when reassignment of desired and current digesters happens, by searching for Wrote changes to desired digest partitions / Wrote changes to current digest partitions.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale no longer supports bringing your own JDK; a JDK is bundled with releases instead.

    We are making this change for the following reasons:

    • By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.

    • Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similar to our own internal setups.

    • By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.

    The last release to include the server.tar.gz artifact will be 1.154.0.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Installation and Deployment

    • The default cleanup.policy for the transientChatter-events topic has been changed from compact to delete,compact. This change does not apply to existing clusters. Changing the setting to delete,compact via Kafka's command-line tools is recommended if transientChatter is taking up excessive space on disk; it is less relevant in production environments, where Kafka's disks tend to be large.

  • Automation and Alerts

    • Aggregate and filter alert types now both display an Error (red) status if starting the alert query times out after 1 minute.

      For more information on alert statuses, see Monitoring Alerts.

  • Configuration

    • Previously, when a publish to global over Kafka timed out from a digester thread, the system initiated a failure shutdown. As of version 1.144, the system instead retries the publish to the Global Database indefinitely for those specific global transactions that originate in a digester thread. If retries occur, they are logged with the error executeTransactionRetryingOnTimeout: unable to execute transaction for global, retrying.

    • Autoshards no longer respond to ingest delay by default, and now use round-robin instead.

  • Ingestion

    • Reduced the waiting time for redactEvents background jobs to complete.

      The background job does not complete until all minisegments affected by the redaction have been merged into full segments. The job previously waited pessimistically for MAX_HOURS_SEGMENT_OPEN (30 days) before attempting the rewrite; it now waits for FLUSH_BLOCK_SECONDS (15 minutes). While some minisegments may still not be rewritten for up to 30 days, this is uncommon. If a rewrite is attempted and encounters minisegments, it is postponed and retried later.

      For more information, see Redact Events API.

  • Functions

    • Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain [ ] brackets, so array:length("field") would always produce the result 0 (since there was no array field named field). The function has been updated to properly throw an exception if given a non-array field name in the array argument. The given array name must therefore include [ ] brackets, since the function only works on array fields.
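
      A minimal sketch of the corrected usage, with hypothetical field names:

        | items[0] := "a"
        | items[1] := "b"
        | array:length("items[]")    // sets _length = 2
        // array:length("items") now throws an exception instead of
        // returning 0, because "items" is not an array field name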

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • The minimum version of Java compatible with LogScale is now 21. Docker users, and users installing the release artifacts that bundle the JDK, are not affected.

      It is recommended to switch to the release artifacts that bundle a JDK, because LogScale no longer supports bringing your own JDK as of release 1.138, see Falcon LogScale 1.138.0 GA (2024-05-14)

New features and improvements

  • Security

    • When extending Retention span or size, any segments that were marked for deletion, but whose files remain in the system, are automatically resurrected. How much data you reclaim via this depends on the backupAfterMillis configuration on the repository.

      For more information, see Audit Logging.

  • Installation and Deployment

    • The Docker containers are now configured to use the following environment variable values internally:

      • DIRECTORY=/data/humio-data

      • HUMIO_AUDITLOG_DIR=/data/logs

      • HUMIO_DEBUGLOG_DIR=/data/logs

      • JVM_LOG_DIR=/data/logs

      • JVM_TMP_DIR=/data/humio-data/jvm-tmp

      This configuration replaces the following chains of internal symlinks, which have been removed:

      • /app/humio/humio/humio-data -> /app/humio/humio-data

      • /app/humio/humio-data -> /data/humio-data

      • /app/humio/humio/logs -> /app/humio/logs

      • /app/humio/logs -> /data/logs

      This change allows the tool scripts in /app/humio/humio/bin to work correctly; they previously failed due to dangling symlinks when invoked via docker run with nothing mounted at /data.

  • UI Changes

    • The Time Interval panel now displays the @ingesttimestamp/@timestamp options selected when querying events for Aggregate Alerts.

      For more information, see Changing Time Interval.

    • A new timestamp column has been added in the Event list displaying the alert timestamp selected (@ingesttimestamp or @timestamp). This will show as the new default column along with the usual @rawstring field column.

      For more information, see Alert Properties.

    • The Users page has been redesigned so that the repository and view roles are displayed in a right-hand side panel, which opens when a repository or view is selected. The panel shows the roles that give the user permissions for the selected repository or view, together with the groups that apply to them and the corresponding query prefixes.

      For more information, see Manage Users.

    • LogScale administrators can now set the default timezone for their users.

      For more information, see Setting Time Zone.

    • When a file is referenced in a query, the Search page now shows a new tab next to the Results and Events tabs, bearing the name of the uploaded file. Activating the file tab will fetch the contents of the file and will show them as a Table widget. Alternatively, if the file cannot be queried, a download link will be presented instead.

      For more information, see Creating a File.

    • When exporting data to CSV, the Export to File dialog now offers the ability to select field names that are suggested based on the query results, or to select all fields in one click.

      For more information, see Exporting Data.

    • Sections can now be created inside dashboards, allowing for grouping relevant content together to maintain a clean and organized layout, making it easier for users to find and analyze related information. Sections can contain data visualizations as well as Parameter Panels. Additionally, they offer more flexibility when using the Time Selector, enabling users to apply a time setting across multiple widgets.

      For more information, see Sections.

    • An organization administrator can now update a user's role on a repository or view from the Users page.

      For more information, see Manage User Roles.

    • The design of the file editor for Lookup Files has been improved. The editor is now also more responsive and has support for tab navigation.

    • The Client type item in the Query details tab has been removed. Previously, Dashboard was incorrectly displayed as the value for both live dashboard and alert query types.

      For more information, see Query Monitor — Query Details.

    • In Organization settings, layout changes have been made to the Groups page for viewing and updating repository and view permissions on a group.

    • UI workflow updates have been made in the Groups page for managing permissions and roles.

      For more information, see Manage Groups.

  • Automation and Alerts

    • A maximum limit of 1 week has been added on the throttle period for Filter Alerts and Standard Alerts. Any existing alert with a higher throttle time will continue to run, but when it is edited, the throttle time must be lowered to at most 1 week.

    • Standard Alerts have been renamed to Legacy Alerts. It is recommended to use Filter Alerts or Aggregate Alerts instead of legacy alerts.

      For more information, see Alerts.

    • The {action_invocation_id} message template has been added: it contains a unique id for the invocation of the action that can be correlated with the activity logs.

      For more information, see Message Templates and Variables, Monitoring Alert Execution through the humio-activity Repository.

    • It is no longer possible to use @id as the throttle field in filter alerts, as this has no effect. Any existing filter alerts with @id as the throttle field will continue to run, but the next time the filter alert is updated, the throttle field must be changed or removed.

      For more information, see Field-Based Throttling.

    • Audit logs for Alerts and Scheduled Searches now contain the package, if installed from a package.

    • A new Disabled actions status has been added and is visible in the Alerts overview table. This status is displayed when an alert (or scheduled search) has only disabled actions attached.

      For more information, see Alerts Overview.

    • Audit logs for Filter Alerts now contain the language version of the alert query.

    • A new aggregate alert type is introduced. The aggregate alert is now the recommended alert type for any queries containing aggregate functions. Like filter alerts, aggregate alerts use ingest timestamps and run back-to-back searches, guaranteeing at-least-once delivery to the actions for more robust results, even in case of ingest delays of up to 24 hours.

      For more information, see Aggregate Alerts.

    • The following adjustments have been made for Scheduled PDF Reports:

      • If the feature is disabled for the cluster, then the Scheduled reports menu item under Automation will not show.

      • If the feature is disabled or the render service is in an error state, users who are granted the ChangeScheduledReport permission and try to access it will be presented with a banner on the Scheduled reports overview page.

      • The permissions overview in the UI now notes that the feature must be enabled and configured correctly for the cluster in order for the ChangeScheduledReport permission to have any effect.

    • The following UI changes are introduced for alerts:

      • The Alerts overview page now presents a table with search and filtering options.

      • An alert-specific version of the Search page is now available for creating and refining your query before saving it as an alert.

      • The alert's properties are opened in a side panel when creating or editing an alert.

      • In the side panel, the recommended alert type to choose is suggested based on the query.

      • For aggregate alerts, the side panel allows you to select the timestamp (@ingesttimestamp or @timestamp).

      For more information, see Creating Alerts, Alert Properties.

    • Users can now see warnings and errors associated with alerts in the Alerts page opened in read-only mode.

  • Storage

    • The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of global for changes is needed.

      For more information, see Bucket Storage.

    • An alternative S3 client is now available and enabled by default. It handles file uploads more efficiently by setting the Content-MD5 header during upload, allowing S3 to perform file validation instead of LogScale doing it via post-upload validation steps. This form of validation should work for all uploads, including when server-side encryption is enabled. The new S3 client only supports this validation mode; configuration variables that opt out of other validation modes therefore have no effect.

      In case of issues, the S3 client can be disabled by setting USE_AWS_SDK=false, which sets LogScale back to the previous default client. Should you need to do this, please reach out to Support to have the issue addressed, because the previous client will eventually be deprecated and removed.

    • Support for bucket storage upload validation has changed. LogScale now supports the following three validation modes:

      • Checking the ETag HTTP response header on the upload response. This mode is the default, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_UPLOAD configuration parameter.

      • Checking the ETag HTTP response header on a HEAD request done for the uploaded file. This is the second preferred mode, and can be opted out of via the BUCKET_STORAGE_IGNORE_ETAG_AFTER_UPLOAD configuration parameter.

      • Downloading the uploaded file in order to validate its checksum. This mode is enabled if neither of the other modes is enabled.

      Previous validation modes that did not compare checksums have been removed, as they were not reliable indicators of the uploaded file integrity.

    • For better efficiency, multiple objects are now deleted from Bucket Storage per request to S3, reducing the number of requests to S3.

    • Support has been implemented for returning results over 1GB in size on the queryjobs endpoint, with a new limit of 8GB on the size of the returned result. The limits on state sizes for queries remain unaltered, so the effect of this change is that some queries that previously completed but failed to return results over 1GB now work.

  • GraphQL API

    • The new environmentVariableUsage() GraphQL API has been introduced for listing non-secret environment variables used by a node. This is intended to help with configuration discovery when managing a large number of LogScale clusters.

    • The new concatenateQueries() GraphQL API has been introduced for programmatically concatenating multiple queries into one. This is intended to eliminate errors that might occur if queries are combined naively.

    • The getFileContent() and newFile() GraphQL endpoint responses will change for empty files. The return type is still UploadedFileSnapshot!, but the lines field will be changed to return [] when the file is empty. Previously, the return value was a list containing an empty list [[]]. This change applies both for empty files, and when the provided filter string doesn't match any rows in the file.

    • The preview tag has been removed from the following GraphQL mutations:

      • createAwsS3SqsIngestFeed

      • deleteIngestFeed

      • resetQuota

      • testAwsS3SqsIngestFeed

      • triggerPollIngestFeed

      • updateAwsS3SqsIngestFeed

    • The log line containing Executed GraphQL query in the humio repository, which is logged for every GraphQL call, now contains the names of the mutations and queries that are executed.

    • The new startFromDateTime argument has been added to the s3ConfigureArchiving GraphQL mutation. When set, S3 Archiving does not consider segment files with a start time before this point in time. In particular, this allows enabling S3 archiving only from a point in time going forward, without also archiving all older files.

    • The stopStreamingQueries() GraphQL mutation is no longer in preview.

    • The getFileContent() GraphQL query now filters CSV file rows case-insensitively and allows partial text matches when the filterString input argument is provided. This makes it possible to search for rows without knowing the full column values, while ignoring case.

    • The defaultTimeZone GraphQL field on the UserSettings GraphQL type no longer defaults to the organization default time zone if the user has no default time zone set. To get the default organization time zone through the API, use the defaultTimeZone field on the OrganizationConfigs GraphQL type.

    • A new field named searchUsers has been added on the group() output type in GraphQL; it is used to search users in the group. The field also allows pagination, ordering, and sorting of the result set.

  • Configuration

    • A new dynamic configuration variable GraphQlDirectivesAmountLimit has been added to restrict how many GraphQL directives can be in a query. Valid values are integers from 5 to 1,000. The default value is 25.

    • Adjusted launcher script handling of the CORES environment variable:

      If CORES is set, the launcher will now pass -XX:ActiveProcessorCount=$CORES to the JVM. If CORES is not set, the launcher will pass -XX:ActiveProcessorCount to the JVM with a value determined by the launcher. This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools.

      -XX:ActiveProcessorCount will be ignored if passed directly via other environment variables, such as HUMIO_OPTS. Administrators currently configuring their clusters this way should remove -XX:ActiveProcessorCount from their variables and set CORES instead.

    • The QueryBacktrackingLimit feature is now enabled by default. The default value for the maximum number of backtracks (the number of times a single event can be processed) a query can perform has been reduced to 2,000.

    • The default retention.bytes for the global topic has been changed from 1 GB to 20 GB. This is applied only when LogScale initially creates the topic. For existing clusters, you should raise retention on the global topic so that it has room for at least a few hours of flow. This is only relevant for large clusters, as small clusters do not produce enough to exceed 1 GB in a few hours. Ideally the global topic should have room for at least 1 day of traffic, for better resilience against large spikes in traffic combined with losing global snapshot files.

    • Cluster-wide configuration of S3 Archiving is introduced, in addition to the existing repo-specific configurations. This feature allows the cluster admin to set up archiving to a (single) bucket for a subset of repositories on the cluster, fully independent of the S3 Archiving available to end users via the UI. This feature adds the following new configuration parameters:

      • S3_CLUSTERWIDE_ARCHIVING_ACCESSKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_SECRETKEY (required)

      • S3_CLUSTERWIDE_ARCHIVING_REGION (required)

      • S3_CLUSTERWIDE_ARCHIVING_BUCKET (required)

      • S3_CLUSTERWIDE_ARCHIVING_PREFIX (defaults to empty string)

      • S3_CLUSTERWIDE_ARCHIVING_PATH_STYLE_ACCESS (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_KMS_KEY_ARN

      • S3_CLUSTERWIDE_ARCHIVING_ENDPOINT_BASE

      • S3_CLUSTERWIDE_ARCHIVING_WORKERCOUNT (default is cores/4)

      • S3_CLUSTERWIDE_ARCHIVING_USE_HTTP_PROXY (default is false)

      • S3_CLUSTERWIDE_ARCHIVING_IBM_COMPAT (default is false)

      Most of these configuration variables work as they do for S3 Archiving, except that the region and bucket are selected here via configuration rather than dynamically by end users, and authentication is via an explicit access key and secret rather than IAM roles or other means.

      The following dynamic configurations are added for this feature:

      • S3ArchivingClusterWideDisabled (defaults to false when not set) — allows temporarily pausing the archiving in case of issues triggered by, for example, the traffic this creates.

      • S3ArchivingClusterWideEndAt and S3ArchivingClusterWideStartFrom — timestamps in milliseconds of the "cut" that selects segment files and events in them to include. When these configuration variables are unset (which is the default) the effect is to not filter by time.

      • S3ArchivingClusterWideRegexForRepoName (defaults to not match if not set) — the repository name regex must be set in order to enable the feature. When set, all repositories that have a name that matches the regex (unanchored) will be archived using the cluster wide configuration from this variable.

  • Ingestion

    • On the Code page accessible from the Parsers menu when writing a new parser, the following validation rules have been added globally:

      • Arrays must be contiguous and must have a field with index 0. For instance, myArray[0] := "some value"

      • Fields that are prefixed with # must be configured to be tagged (to avoid falsely tagged fields).

      An error is displayed on the parser Code page if the rules above are violated. This error will not appear during actual parsing. A minimal example is sketched below.

      For more information, see Creating a New Parser.
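
      A minimal parser sketch under these rules; the field names are hypothetical:

        // valid: array indices start at 0 and are contiguous
        parseJson()
        | myArray[0] := "first"
        | myArray[1] := "second"
        // invalid: flagged on the Code page, since index 0 would be unset
        // | otherArray[1] := "value"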

    • To avoid exporting redundant fields in the parsers, LogScale will now omit YAML fields with a null value when exporting YAML templates — even when such fields are contained inside a list. Omitting fields with a null value previously only happened for fields outside a list.

  • Log Collector

    • The RemoteUpdate version dialog has been improved, adding the ability to cancel pending and scheduled updates.

  • Functions

    • Matching on multiple rows with the match() query function is now supported. This functionality allows match() to emit multiple events, one for each matching row. The nrows parameter is used to specify the maximum number of rows to match on.

      For more information, see match().
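
      A minimal sketch of multi-row matching; the file name, fields, and row limit are hypothetical:

        // emits one event per matching row in known-hosts.csv, up to 3 rows
        | match(file="known-hosts.csv", field=host, nrows=3)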

    • The new query function text:contains() is introduced. The function tests if a specific substring is present within a given string. It takes two arguments: string and substring, both of which can be provided as plain text, field values, or results of an expression.

      For more information, see text:contains().
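
      A minimal usage sketch, assuming text:contains() is used as a filter on a hypothetical message field:

        // keep only events whose message field contains the substring "timeout"
        text:contains(string=message, substring="timeout")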

    • The match() function now supports matching on multiple pairs of fields and columns.

      For more information, see match().
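
      A minimal sketch of matching on multiple field/column pairs; the file and all names are hypothetical:

        // an event matches a row only if its host matches the row's hostname
        // column and its site matches the row's location column
        | match(file="assets.csv", field=[host, site], column=[hostname, location])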

    • The new query function array:append() is introduced, used to append one or more values to an existing array, or to create a new array.

      For more information, see array:append().
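
      A minimal usage sketch with hypothetical field names:

        // appends a literal and the value of err_code to the errors[] array,
        // creating the array if it does not already exist
        | array:append(array="errors[]", values=["timeout", err_code])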

Fixed in this release

  • Falcon Data Replicator

    • Testing new FDR feeds using S3 aliasing would fail for valid credentials. This issue has now been fixed.

  • UI Changes

    • The Query Monitor page would show queries running on @ingesttimestamp as running on a search interval over all time. This wrong behavior has been fixed to show the correct search interval.

    • The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.

    • When clicking to sort the Sessions based on Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.

    • The event histogram would not adhere to the timezone selected for the query.

    • When managing sessions within an organization, it was not possible to sort active sessions by the Last active timestamp column. This issue has now been fixed.

    • In the Export to File dialog, when using the keyboard to switch between options, a different item than the one selected was highlighted. This issue has now been fixed.

    • It was not possible to sort by columns other than ID in the Cluster nodes table under the Operations UI menu. This issue has now been fixed.

    • A long list of large queries could prevent the query list under the Recent tab from updating. The number of recent queries is now limited to 30.

      For more information, see Recalling Queries.

    • Fixed a visualization issue where the values in a multi-select combo box could overlap with the number of selected items.

    • The dialog to quickly switch to another repository would open when pressing the undo hotkey on Windows machines. This wrong behavior has now been fixed.

    • A race condition in LogScale Multi-Cluster Search has been fixed: a done query with an incomplete result could be overwritten, causing the query to never complete.

    • On the Organizations overview page, the Volume column width within a specific organization could not be adjusted. This issue has now been fixed.

    • The display of Lookup Files metadata in the file editor has been fixed for very long user names.

    • The settings used to disable automatic searching would not be respected when creating a new alert. This issue has now been fixed.

    • When Creating a File, saving an invalid .csv file was possible in the file editor. This wrong behavior has now been fixed.

    • In the Export to file dialog used when Exporting Data, the CSV fields input would in some cases not be populated with all fields. This issue has now been fixed.

  • Automation and Alerts

    • The read-only alert page would wrongly report that actions were being throttled when a filter alert had disabled throttling. This issue has now been fixed.

    • Actions would show up as scheduled searches and vice versa when viewing the contents of a package. This issue has now been fixed.

    • Fixed an issue where queries that were failing would never complete. This could cause Alerts and Scheduled Searches to hang.

    • Scheduled Searches would not always log if runs were skipped due to being behind. This issue has now been fixed.

  • Storage

    • Throttling for bucket uploads/downloads could cause an unintentionally high number of concurrent uploads or downloads, to the point of exceeding the pool of connections. This issue has now been fixed.

    • Digest threads could fail to start digesting if global is very large, and if writing to global is slow. This issue has now been fixed.

    • Notifying to Global Database about file changes could be slow. This issue has now been fixed.

    • Segments could be considered under-replicated for a long time leading to events being retained in Kafka for extended periods. This wrong behavior has now been fixed.

    • Throttling for bucket uploads/downloads could cause unintentionally harsh throttling of downloads in favor of running more uploads concurrently. This issue has now been fixed.

    • The throttling for segment rebalancing has been reworked, which should help rebalancing keep up without overwhelming the cluster.

  • GraphQL API

    • The background processing underlying the redactEvents() mutation would fail if the filter included tags. This error has now been fixed.

    • The getFileContent() GraphQL endpoint will now return an UploadedFileSnapshot! datatype with the field totalLinesCount: 0 when a file has no matches for a given filter string. Previously it would return the total number of lines in the file.

  • Configuration

    • A value of 1 for the BucketStorageUploadInfrequentThresholdDays dynamic configuration now results in all uploads to bucket storage being subject to "S3 Intelligent-Tiering". Some installs want this because they apply versioning to their bucket: even though an object's life span as a non-deleted object is short, the actual data remains in the bucket much longer, so tiering all objects saves on storage cost. Objects below 128KB are never tiered in any case.

  • Dashboards and Widgets

    • Arguments for parameters no longer used in a deleted query could be submitted anyway when invoking a saved query that uses the same arguments, thus generating an error. This issue has now been fixed.

    • The Table widget header appeared transparent. This issue has now been fixed.

  • Ingestion

    • Event Forwarding would fail silently if an error occurred while executing the query. This issue has now been fixed.

    • A queryToRead field has been added to the filesUsed property of queryResult to read the data from a file used in a query.

      For more information, see Polling a Query Job.

    • Output events from parsers could be returned in the wrong order. This issue has now been fixed, and the output now returns events in the correct order.

    • Event Forwarding using match() or lookup() with a missing file would continue to fail after the file was uploaded. This issue has now been fixed.

    • Cache files, used by query functions such as match() and readFile(), are now written to disk for up to 24 hours after use. This can significantly improve the time it takes for a query to start; however, it naturally takes up disk space.

      A fraction of the disk used can be controlled using the configuration variables TABLE_CACHE_MAX_STORAGE_FRACTION and TABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY.

    • When shutting down a node, the process that loads files used by a parser would be stopped before the parser itself. This could lead to ingested events not being parsed. This issue has now been fixed.

  • Log Collector

    • The #repo.cid tag was missing from live query results when the DerivedCidTag feature flag was enabled. This issue has now been fixed.

    • Queries that were nested too deeply would crash LogScale nodes. This issue has now been fixed.

  • Functions

    • parseXml() would sometimes only partially extract text elements when the text contained newline characters. This issue has now been fixed.

    • Parsing the empty string as a number could lead to errors causing the query to fail (in formatTime() function, for example). This issue has now been fixed.

    • The query backtracking limit would wrongly apply to the total number of events, rather than how many times individual events are passed through the query pipeline. This issue has now been fixed.

    • Long running queries using window() could end up never completing. This issue has now been fixed.

    • writeJson() would write invalid JSON by not correctly quoting numbers starting with a unary plus or ending with a trailing . (dot). This issue has now been fixed.

Known Issues

  • Queries

    • A known issue in the implementation of the match() function when using the cidr option in the mode parameter can cause reduced performance for the query and block other queries from executing.
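
      A sketch of the affected usage pattern; the file and field names are hypothetical:

        // CIDR-mode lookups like this one are affected by the known issue
        | match(file="subnets.csv", field=ip, column=network, mode=cidr)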

Improvement

  • UI Changes

    • The performance of the query editor has been improved, especially when working with large query results.

  • Automation and Alerts

    • The log field previouslyPlannedForExecutionAt has been renamed to earliestSkippedPlannedExecution when skipping scheduled search executions.

    • The field useProxyOption has been added to Webhooks action templates to be consistent with the other action templates.

    • The severity of a number of alert and scheduled search logs has been changed to better reflect the severity for users.

  • Storage

    • The global topic throughput has been improved for particular updates to segments in datasources with many segments.

      For more information, see Global Database.

    • The segment merge span now varies by +/- 10% of the configured value, to avoid all segments switching to a new merge target at the same point in time.

  • Ingestion

    • The input validation on Split by AWS records preprocessing when using Set up a New Ingest Feed has been simplified: it still validates that the incoming file is a single JSON object (and not, for example, multiple newline-delimited JSON objects), but the object may or may not contain a Records array. This resolves an ingest feed issue for CloudTrail with log file integrity enabled, where the emitted digest files (which do not have a Records array) would halt the ingest feed. These digest files are now ignored.

      For more background information, see this related release note.

    • The Split by AWS records preprocessing when using Set up a New Ingest Feed now requires the Records array. This better protects against mistakenly using this preprocessing step with non-AWS records, which would interpret the files as empty batches of events, leading to notifications in SQS being deleted without any events being ingested.