Falcon LogScale 1.148.0 Internal (2024-07-23)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.148.0 | Internal | 2024-07-23 | Internal Only | 2025-07-31 | No | 1.112 | No
Available for download two days after release.
Internal-only release.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The `-XX:ActiveProcessorCount=n` command-line option will be ignored if set. Users who need to configure the core count manually should set the `CORES=n` environment variable instead. This will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Removed
Items that have been removed as of this release.
API
The following previously deprecated Kafka API endpoints have been removed:
- `POST /api/v1/clusterconfig/kafka-queues/partition-assignment`
- `GET /api/v1/clusterconfig/kafka-queues/partition-assignment`
- `POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults`
- `GET /api/v1/clusterconfig/kafka-queues/partition-assignment/id`
Deprecation
Items that have been deprecated and may be removed in a future release.
The `server.tar.gz` release artifact has been deprecated. Users should switch to the OS/architecture-specific `server-linux_x64.tar.gz` or `server-alpine_x64.tar.gz`, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead. We are making this change for the following reasons:
- By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and it also reduces the size of release artifacts.
- Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.
- By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release that includes the `server.tar.gz` artifact will be 1.154.0.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward, and we recommend the following:
If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The `HUMIO_JVM_ARGS` environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
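As a minimal sketch, assuming a deployment that still sets JVM options through `HUMIO_JVM_ARGS`, the migration amounts to moving those options into the launcher variables; `HUMIO_OPTS` and the flag below are purely illustrative placeholders, not a recommendation:

```bash
# Hypothetical sketch: moving JVM options out of the deprecated HUMIO_JVM_ARGS.
# Which launcher variable a given option belongs in depends on your setup;
# HUMIO_OPTS and the -Xss flag are placeholders.

# Before (deprecated, removed in 1.154.0):
# export HUMIO_JVM_ARGS="-Xss2m"

# After: let the launcher script assemble the JVM command line.
export HUMIO_OPTS="-Xss2m"
```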
The `lastScheduledSearch` field on the `ScheduledSearch` datatype is now deprecated and planned for removal in LogScale version 1.202. The new `lastExecuted` and `lastTriggered` fields have been added to the `ScheduledSearch` datatype to replace `lastScheduledSearch`.
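A minimal sketch of reading the replacement fields through the GraphQL API is shown below; only the `lastExecuted` and `lastTriggered` field names come from this release note, while the surrounding `searchDomain`/`scheduledSearches` query shape, repository name, and token are assumptions:

```bash
# Hypothetical sketch: querying the new ScheduledSearch fields via GraphQL.
# The query shape, repository name, and token are placeholders/assumptions.
curl -s "https://$YOUR_LOGSCALE_URL/graphql" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ searchDomain(name: \"myrepo\") { scheduledSearches { id name lastExecuted lastTriggered } } }"}'
```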
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Ingestion
Reduced the waiting time for `redactEvents` background jobs to complete.
The background job will not complete until all mini-segments affected by the redaction have been merged into full segments. The job was pessimistically waiting for `MAX_HOURS_SEGMENT_OPEN` (30 days) before attempting the rewrite. It has been changed to wait for `FLUSH_BLOCK_SECONDS` (15 minutes) instead; while some mini-segments may still not be rewritten for 30 days, this is uncommon. If a rewrite is attempted and encounters mini-segments, it is postponed and retried later.
For more information, see Redact Events API.
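For context, a redaction such as the following sketch is what schedules the background job in question; the endpoint path and body fields are assumptions based on the Redact Events API, and the repository, token, query string, and timestamps are placeholders:

```bash
# Hypothetical sketch: submitting a redaction that triggers the background job above.
# Endpoint path and body fields are assumptions based on the Redact Events API;
# repository, token, query string, and epoch-millisecond timestamps are placeholders.
curl -s "https://$YOUR_LOGSCALE_URL/api/v1/repositories/myrepo/deleteevents" \
  -X POST \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"queryString": "user_email=\"jane@example.com\"", "startTime": 1721088000000, "endTime": 1721692800000}'
```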
Functions
Prior to LogScale v1.147, the `array:length()` function accepted a value in the `array` argument that did not contain brackets `[ ]`, so that `array:length("field")` would always produce the result `0` (since there was no field named field). The function has now been updated to properly throw an exception if given a non-array field name in the `array` argument. Therefore, the function now requires the given array name to have `[ ]` brackets, since it only works on array fields.
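A minimal sketch of the new requirement, run through the synchronous query API, might look as follows; the repository name, field name, token, and time range are placeholders:

```bash
# Hypothetical sketch: array:length() now requires the [] suffix on the array name.
# Repository, token, field name, and time range are placeholders.
curl -s "https://$YOUR_LOGSCALE_URL/api/v1/repositories/myrepo/query" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"queryString": "array:length(\"errors[]\")", "start": "1hour"}'
# Passing "errors" without the [] brackets now raises an error instead of returning 0.
```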
New features and improvements
UI Changes
The Users page has been redesigned so that the repository and view roles are displayed in a right-hand side panel, which opens when a repository or view is selected. The repository and view roles panel shows the roles that give permissions to the user for the selected repository or view, together with the groups that apply to them and the corresponding query prefixes.
For more information, see Manage Users.
Storage
The size of the queue for segments being uploaded to bucket storage has been increased. This reduces how often a scan of global for changes is needed.
For more information, see Bucket Storage.
Configuration
Adjusted the launcher script handling of the `CORES` environment variable:
- If `CORES` is set, the launcher will now pass `-XX:ActiveProcessorCount=$CORES` to the JVM.
- If `CORES` is not set, the launcher will pass `-XX:ActiveProcessorCount` to the JVM with a value determined by the launcher.
This ensures that the core count configured for LogScale is always the same as the core count configured for internal JVM thread pools. `-XX:ActiveProcessorCount` will be ignored if passed directly via other environment variables, such as `HUMIO_OPTS`. Administrators currently configuring their clusters this way should remove `-XX:ActiveProcessorCount` from their variables and set `CORES` instead, as shown in the sketch below.
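A minimal sketch of that migration, assuming the variables are set in a shell-style environment file (the core count and the previous flag value are placeholders):

```bash
# Hypothetical sketch: migrating from a direct JVM flag to the CORES variable.
# The core count and the previous HUMIO_OPTS contents are placeholders.

# Before (no longer effective; -XX:ActiveProcessorCount in HUMIO_OPTS is now ignored):
# export HUMIO_OPTS="-XX:ActiveProcessorCount=16"

# After: set CORES and let the launcher pass -XX:ActiveProcessorCount=$CORES to the JVM.
export CORES=16
```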
Fixed in this release
UI Changes
The dropdown menu for selecting fields used when exporting data to a CSV file was hidden behind the Export to file dialog. This issue has now been fixed.
Ingestion
A `queryToRead` field has been added to the `filesUsed` property of `queryResult` to read the data from a file used in a query.
For more information, see Polling a Query Job.
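As a hedged sketch, polling a query job and inspecting the files it used might look like the following; the endpoint shapes follow the Query Jobs API, while the repository, token, lookup file, and the exact placement of `filesUsed` in the response are assumptions:

```bash
# Hypothetical sketch: create a query job that uses a file, then poll it and inspect filesUsed.
# Repository, token, lookup file, and the response path to filesUsed are assumptions.
JOB_ID=$(curl -s "https://$YOUR_LOGSCALE_URL/api/v1/repositories/myrepo/queryjobs" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"queryString": "match(file=\"lookup.csv\", field=ip)", "start": "1hour"}' | jq -r .id)

curl -s "https://$YOUR_LOGSCALE_URL/api/v1/repositories/myrepo/queryjobs/$JOB_ID" \
  -H "Authorization: Bearer $API_TOKEN" | jq '.metaData.filesUsed'
```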
Cache files, used by query functions such as `match()` and `readFile()`, are now written to disk for up to 24 hours after use. This can significantly improve the time it takes for a query to start; however, it naturally takes up disk space. The fraction of the disk that can be used is controlled with the configuration variables `TABLE_CACHE_MAX_STORAGE_FRACTION` and `TABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY`.
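A minimal sketch of capping that disk usage, assuming the variables are set in the node's environment (the fraction values below are placeholders, not documented defaults):

```bash
# Hypothetical sketch: limiting the disk fraction available to the query-function cache.
# The values are placeholders; defaults and valid ranges are not stated in this note.
export TABLE_CACHE_MAX_STORAGE_FRACTION=0.2
export TABLE_CACHE_MAX_STORAGE_FRACTION_FOR_INGEST_AND_HTTP_ONLY=0.1
```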