Falcon LogScale 1.127.0 GA (2024-02-27)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.127.0 | GA | 2024-02-27 | Cloud | 2025-04-30 | No | 1.70.0 | No
Available for download two days after release.
Bug fixes and updates.
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
We aim to stop publishing the `jar` distribution of LogScale (e.g. `server-1.117.jar`) as of LogScale version 1.130.0. Users deploying via Docker images are not affected. Users deploying on bare metal should ensure they deploy the `tar` artifact, and not the `jar` artifact. A migration guide for bare metal deployments is available at How-To: Migrating from server.jar to Launcher Startup.
We intend to drop support for Java 17, making Java 21 the minimum. We plan to make this change in March 2024.
Deprecation
Items that have been deprecated and may be removed in a future release.
The `assetType` GraphQL field on the `Alert`, `Dashboard`, `Parser`, `SavedQuery` and `ViewInteraction` datatypes has been deprecated and will be removed in version 1.136 of LogScale.

The `any` argument to the `type` parameter of `sort()` and `table()` has been deprecated and will be removed in version 1.142. Warning prompts will be shown for queries that fall into either of these two cases (see the example sketch below):

- If you are explicitly supplying an `any` argument, either simply remove both the parameter and the argument (for example, change `sort(..., type=any)` to `sort(...)`), or supply the argument for `type` that corresponds to your data.
- If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the `type` parameter to `hex`, e.g. `sort(..., type=hex)`.

In all other cases, no action is needed.

The new default value for `sort()` and `table()` will be `number`. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for `type`.
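A sketch of the two migration cases, using hypothetical fields `statuscode` (numeric values) and `checksum` (hexadecimal values) that do not appear in the release notes:

```
// Before: explicitly passing the deprecated argument
sort(statuscode, type=any)

// After: drop the parameter and argument, or state the type matching the data
sort(statuscode)
sort(statuscode, type=number)

// Sorting hexadecimal values by their numerical equivalents
sort(checksum, type=hex)
```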
In the GraphQL API, the `ChangeTriggersAndAction` enum value for both the `Permission` and `ViewAction` enums is now deprecated and will be removed in version 1.136 of LogScale.

The `humio` Docker image is deprecated in favor of `humio-core`. `humio` is no longer considered suitable for production use, as it runs Kafka and ZooKeeper on the same host as LogScale, which our deployment guidelines no longer recommend. The final release of the `humio` Docker image will be in version 1.130.0.

The new `humio-single-node-demo` image is an all-in-one container suitable for quick and easy demonstration setups, but it is entirely unsupported for production use. For more information, see Installing Using Containers.
We are deprecating the `humio/kafka` and `humio/zookeeper` Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. If you still require `humio/kafka` or `humio/zookeeper` for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

In the GraphQL API, the `name` argument to the `parser` field on the `Repository` datatype has been deprecated and will be removed in version 1.136 of LogScale.
New features and improvements
Functions
The `setField()` query function is introduced. It takes two expressions, `target` and `value`, and sets the field named by the result of the `target` expression to the result of the `value` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime. For more information, see `setField()`.
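A minimal sketch of the runtime-named-field pattern; the field names below are hypothetical and not taken from the release notes:

```
// target is an expression naming the destination field; value is the value to store.
// Here the destination field name is carried in the hypothetical field destName,
// and statuscode is a hypothetical numeric field on the event.
destName := "code_plus_one"
| setField(target=destName, value=statuscode + 1)
```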
The `getField()` query function is introduced. It takes an expression, `source`, and sets the field defined by `as` to the result of the `source` expression. This function can be used to manipulate fields whose names are not statically known, but computed at runtime. For more information, see `getField()`.
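A companion sketch for reading a field whose name is only known at runtime; again, the field names are hypothetical:

```
// source is an expression naming the field to read; assigning with := supplies `as`.
// fieldWithName is a hypothetical field holding the name of another field.
fieldWithName := "hostname"
| result := getField(fieldWithName)
```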
Fixed in this release
Ingestion
Fixed an issue that prevented the creation of Netflow/UDP protocol ingest listeners.
Improvement
Configuration
The default maximum limit for `groupBy()` has been increased from 200,000 to 1,000,000, meaning that this function can now be asked to collect up to a million groups. However, due to stability concerns, `groupBy()` is not allowed to return the full million rows as a result when it is the last aggregator: this is governed by the `QueryResultRowCountLimit` dynamic configuration, which remains unchanged. Therefore, this new limit is best utilized when `groupBy()` is used as a computational tool for creating groups that are then aggressively filtered and/or aggregated down in size, as illustrated in the example query below. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the `GroupMaxLimit` dynamic configuration.

The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of taking up more resources. This in turn indirectly limits the number of concurrent queries, as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the `QueryCoordinatorMemoryLimit` dynamic configuration to 400,000,000.
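A sketch of the `groupBy()` pattern referred to above, using a hypothetical `client_ip` field: the function may create a very large number of groups, which are then filtered and trimmed so that only a small result set is returned.

```
// Group on a high-cardinality field, then aggressively filter and trim
// the groups so only a small result set reaches the end of the query.
groupBy(client_ip, function=count(as=requests))
| requests > 10000
| sort(requests, type=number, order=desc, limit=100)
```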
Functions
Live queries now restart and run with the updated version of a saved query when the saved query changes.
For more information, see User Functions (Saved Searches).
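For illustration only, assuming a saved query named "Suspicious Logins" exists in the view (the saved query and field names are hypothetical), a live query that invokes it as a user function will now restart and pick up edits to the saved query:

```
// Invoke a saved query as a user function inside a live query.
$"Suspicious Logins"()
| groupBy(user, function=count())
```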