Falcon LogScale 1.163.0 GA (2024-11-05)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.163.0 | GA | 2024-11-05 | Cloud | Next LTS | No | 1.112 | No
Available for download two days after release.
Bug fixes and updates.
Removed
Items that have been removed as of this release.
GraphQL API
Removed the following deprecated fields from the Cluster GraphQL type:
ingestPartitionsWarnings
suggestedIngestPartitions
storagePartitions
storagePartitionsWarnings
suggestedStoragePartitions
Configuration
Removed the UNSAFE_ALLOW_FEDERATED_CIDR, UNSAFE_ALLOW_FEDERATED_MATCH, and ALLOW_MULTI_CLUSTER_TABLE_SYNCHRONIZATION environment variables.
Deprecation
Items that have been deprecated and may be removed in a future release.
The lastScheduledSearch field on the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.
New features and improvements
Installation and Deployment
Bumped the lowest compatible version for UNSAFE_RELAX_FEDERATED_PROTOCOL_VERSION_CHECK to 1.163.0. Multi-Cluster Search can only be used when all participating clusters are running 1.163 or above.
UI Changes
Additional tabs from files and/or tables now appear on the Search page next to the Results tab, for example when the new defineTable() function is used in queries. The tab names are prefixed with "Table: " to make it clearer that they refer to, for example, ad-hoc tables. For more information, see Using Ad-hoc Tables.
GraphQL API
Added the permissionType field to the Group GraphQL type. This field identifies the level of permissions the group has (view, organization or system).
Added the following mutations:
createViewPermissionsTokenV2
createSystemPermissionsTokenV2
createOrganizationPermissionsTokenV2
createPersonalUserTokenV2
These mutations extend the functionality of the previous versions (without the V2 suffix) by returning additional information about the token, such as the id, name, permissions, expiry and IP filters.
Ingestion
Query resources will now also account for reading segment files in addition to scanning files. This will enable better control of CPU resources between search and the data pipeline operations (ingest, digest, storage).
Queries
The ad-hoc tables feature is introduced for easier joins. Use the defineTable() function to define temporary lookup tables, then join them with the results of the primary query using the match() function, as sketched in the example below. The feature offers several benefits:
Intuitive approach that now allows for writing join-like queries in the order of execution
Step-by-step workflow to create complex, nested joins easily
Workflow that is consistent with the model used when working with Lookup Files
Easy troubleshooting while building queries, using the readFile() function
Expanded join use cases, providing support for:
inner joins with match(... strict=true)
left joins with match(... strict=false)
right joins with readFile() | match(... strict=false)
join capabilities in LogScale Multi-Cluster Search environments (Self-Hosted users only)
The LiveTables feature flag now defaults to enabled instead of disabled, so that the feature is available by default. For more information, see Using Ad-hoc Tables.
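As a minimal sketch (the field names username and loginFailure and the table name suspicious_users are illustrative assumptions, not taken from this release note), a left join with an ad-hoc table could look like this:

defineTable(name="suspicious_users", query={loginFailure=true}, include=[username])
| match(table=suspicious_users, field=username, strict=false)

Here defineTable() builds a temporary table of usernames from events matching loginFailure=true, and match() with strict=false keeps every event from the primary query while enriching those whose username appears in the table; strict=true would instead keep only the matching events, giving an inner join.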
Changed the internal submit endpoint so that requests log correct information on whether the request is internal or not.
Functions
The following query function limits now have their minimum value set to 1. In particular:
The bucket() and timeChart() query functions now require that the value given as their buckets argument is at least 1. For example, bucket(buckets=0) will produce an error.
The collect(), hash(), readFile(), selfJoin(), top() and transpose() query functions now require their limit argument to be at least 1. For example, top([aid], limit=0) will produce an error.
The series() query function now requires the memlimit argument to be at least 1, if provided. For example, | series(collect=aid, memlimit=0) will produce an error.
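As an illustrative correction (aid is the field used in the examples above; the value 10 is a placeholder), queries that previously passed 0 must now use a value of at least 1:

// Now rejected: the minimum value is 1
// top([aid], limit=0)
// Accepted:
top([aid], limit=10)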
Fixed in this release
Automation and Alerts
Fixed an issue where the Action overview page would not load if it contained a large number of actions.
Storage
Segments were not being fetched on an owner node. This issue could lead to temporary under-replication and to events being kept in Kafka.
Resolved a defect that could lead to corrupted JSON messages on the internal Kafka queue.
Ingestion
An error is no longer returned when running parser tests without test cases.
Queries
Fixed an issue which could cause live query results from some workers to be temporarily represented twice in the final result. The situation was transient and could only occur during digester changes.
Fixed an issue where a query would fail to start in some cases when the query cache was available. The user would see the error Recent events overlap span excluded from query using historicStartMin.