Falcon LogScale 1.149.0 GA (2024-07-30)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.149.0 | GA | 2024-07-30 | Cloud | 2025-09-30 | No | 1.112 | No |
Available for download two days after release.
Bug fixes and updates.
Removed
Items that have been removed as of this release.
Installation and Deployment
The previously deprecated jar distribution of LogScale (e.g. server-1.117.jar) is no longer published starting from this version. For more information, see Falcon LogScale 1.130.0 GA (2024-03-19).
The previously deprecated humio/kafka and humio/zookeeper Docker images are now removed and no longer published.
Deprecation
Items that have been deprecated and may be removed in a future release.
The server.tar.gz release artifact has been deprecated. Users should switch to the OS/architecture-specific server-linux_x64.tar.gz or server-alpine_x64.tar.gz, which include bundled JDKs. Users installing a Docker image do not need to make any changes. With this change, LogScale will no longer support bringing your own JDK; we will bundle one with releases instead. We are making this change for the following reasons:
By bundling a JDK specifically for LogScale, we can customize the JDK to contain only the functionality needed by LogScale. This is a benefit from a security perspective, and also reduces the size of release artifacts.
Bundling the JDK ensures that the JDK version in use is one we've tested with, which makes it more likely a customer install will perform similarly to our own internal setups.
By bundling the JDK, we will only need to support one JDK version. This means we can take advantage of enhanced JDK features sooner, such as specific performance improvements, which benefits everyone.
The last release that includes the server.tar.gz artifact will be 1.154.0.
The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
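As a rough illustration of such a migration, the sketch below moves flags out of HUMIO_JVM_ARGS and into dedicated launcher-script variables. The specific variable names and flags shown are assumptions for illustration only; check the Configuration documentation for the variables your cluster should actually use:

```shell
# Deprecated approach: all JVM flags funneled through HUMIO_JVM_ARGS.
export HUMIO_JVM_ARGS="-Xmx16g -XX:+UseParallelGC"

# Launcher-script approach: split flags into purpose-specific variables.
# HUMIO_MEMORY_OPTS and HUMIO_OPTS are assumed names for this sketch;
# verify against the Configuration documentation before relying on them.
export HUMIO_MEMORY_OPTS="-Xmx16g"
export HUMIO_OPTS="-XX:+UseParallelGC"
```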
The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
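For illustration, a GraphQL query along the following lines could read the replacement fields. The field names come from the note above, while the surrounding structure (a searchDomain lookup with a scheduledSearches selection) is an assumption for this sketch:

```graphql
query {
  searchDomain(name: "my-repo") {
    scheduledSearches {
      name
      # Replacements for the deprecated lastScheduledSearch field:
      lastExecuted
      lastTriggered
    }
  }
}
```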
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Functions
Prior to LogScale v1.147, the array:length() function accepted a value in the array argument that did not contain brackets [ ], so that array:length("field") would always produce the result 0 (since there was no array named field). The function has now been updated to properly throw an exception if given a non-array field name in the array argument. Therefore, the function now requires the given array name to have [ ] brackets, since it only works on array fields.
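As a minimal illustration of the new requirement (the errors[] field and the errorCount output name are hypothetical):

```logscale
// Counts the entries of the array field errors[0], errors[1], ...
// Note the required [] brackets on the array name.
array:length("errors[]", as=errorCount)

// array:length("errors") without brackets now throws an exception
// instead of silently returning 0 as it did before v1.147.
```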
Upgrades
Changes that may occur or be required during an upgrade.
Installation and Deployment
The bundled JDK is upgraded to 22.0.2.
Fixed in this release
UI Changes
When clicking to sort the Sessions list by Last active, the sorting was wrongly based on Login time instead. This issue has now been fixed.
Fixed a visualization issue where the values in a multi-select combo box could overlap with the number of selected items.
Configuration
A value of 1 for the BucketStorageUploadInfrequentThresholdDays dynamic configuration now results in all uploads to bucket storage being subject to S3 Intelligent-Tiering. Some installs want this because they apply versioning to their bucket: even though the life span as a non-deleted object is short, the actual data remains in the bucket for much longer, so tiering all objects saves on storage cost for them. Objects below 128KB are never tiered in any case.
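As a rough sketch, a dynamic configuration value like this is typically set through the GraphQL API. The setDynamicConfig mutation and input shape below are assumptions for illustration; consult the Configuration documentation for the supported method:

```graphql
mutation {
  # Sets the threshold to 1 day so that all bucket uploads are tiered.
  setDynamicConfig(input: {
    config: BucketStorageUploadInfrequentThresholdDays,
    value: "1"
  })
}
```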