Falcon LogScale 1.163.0 GA (2024-11-05)

Version:           1.163.0
Type:              GA
Release Date:      2024-11-05
Availability:      Cloud
End of Support:    2025-12-31
Security Updates:  No
Upgrades From:     1.112.0
Downgrades To:     1.157.0
Config. Changes:   Yes


Bug fixes and updates.

Removed

Items that have been removed as of this release.

GraphQL API

  • Removed the following deprecated fields from the Cluster GraphQL type:

    • ingestPartitionsWarnings

    • suggestedIngestPartitions

    • storagePartitions

    • storagePartitionsWarnings

    • suggestedStoragePartitions
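
GraphQL queries that still select the removed fields will now fail schema validation. A minimal sketch of a Cluster query that avoids the removed partition fields (the field selection here is an illustrative assumption, not an exhaustive list of remaining fields):

```graphql
query ClusterOverview {
  cluster {
    nodes {
      id
    }
  }
}
```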

Configuration

  • The UNSAFE_ALLOW_FEDERATED_CIDR, UNSAFE_ALLOW_FEDERATED_MATCH, and ALLOW_MULTI_CLUSTER_TABLE_SYNCHRONIZATION environment variables have been removed; the behavior they controlled is now always enabled.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The QUERY_COORDINATOR environment variable is deprecated. To control whether a node should be allowed to be a query coordinator, use the query node task instead. Node tasks can be assigned and unassigned at runtime using the assignTasks() and unassignTasks() GraphQL mutations respectively, or controlled using the INITIAL_DISABLED_NODE_TASKS environment variable.

    For more information, see INITIAL_DISABLED_NODE_TASKS.
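
    For example, a node can be prevented from acting as a query coordinator at startup by disabling the query node task (a minimal sketch; the value is the task name given above):

    ```
    # Environment for the LogScale node process:
    # disable the query node task so this node is not used as a query coordinator.
    INITIAL_DISABLED_NODE_TASKS=query
    ```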

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Once LogScale has been upgraded to 1.162.0 with the WriteNewSegmentFileFormat feature flag enabled, LogScale cannot be downgraded to a version lower than 1.157.0.

New features and improvements

  • Ingestion

    • Query resources now also account for reading segment files, in addition to scanning them. This enables better control of CPU resources between search and the data pipeline operations (ingest, digest, storage).

  • Queries

    • The ad-hoc tables feature is introduced for easier joins. Use the defineTable() function to define temporary lookup tables, then join them with the results of the primary query using the match() function. The feature offers several benefits:

      • An intuitive approach that allows writing join-like queries in the order of execution

      • A step-by-step workflow for creating complex, nested joins easily

      • A workflow consistent with the model used when working with Lookup Files

      • Easy troubleshooting while building queries, using the readFile() function

      • Expanded join use cases, providing support for:

        • inner joins with match(... strict=true)

        • left joins with match(... strict=false)

        • right joins with readFile() | match(... strict=false)

      • Join capabilities in LogScale Multi-Cluster Search environments (Self-Hosted users only)

      When match() or similar functions are used, additional tabs from the files and/or tables used in the primary query now appear in order in Search, next to the Results tab. The tab names are prefixed with "Table: " to make it clearer what they refer to.

      For more information, see Using Ad-hoc Tables.
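
      A hedged sketch of an ad-hoc table join (repository and field names are illustrative; the parameter names follow the defineTable() and match() descriptions above):

      ```
      // Define a temporary lookup table from one query...
      defineTable(name="suspicious_ips", query={alert=true}, include=[src_ip, severity])
      // ...then left-join it onto the primary query (strict=false keeps unmatched events).
      | status=403
      | match(table="suspicious_ips", field=src_ip, strict=false)
      ```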

    • Changed the internal submit endpoint so that requests log correct information on whether the request is internal or not.

  • Functions

    • The following query function limits now have their minimum value set to 1. In particular:

      • The bucket() and timeChart() query functions now require that the value given as their bucket argument is at least 1. For example, bucket(buckets=0) will produce an error.

      • The collect(), hash(), readFile(), selfJoin(), top() and transpose() query functions now require their limit argument to be at least 1. For example, top([aid], limit=0) will produce an error.

      • The series() query function now requires the memlimit argument to be at least 1, if provided. For example, | series(collect=aid, memlimit=0) will produce an error.

Fixed in this release

  • Automation and Triggers

    • Fixed an issue where the Action overview page would not load if it contained a large number of actions.

  • Storage

    • Segments were not being fetched on an owner node. This issue could lead to temporary under-replication and to events being retained in Kafka.

    • Resolved a defect that could lead to corrupted JSON messages on the internal Kafka queue.

  • Ingestion

    • An error is no longer returned when running parser tests without test cases.

  • Queries

    • Fixed an issue that could cause live query results from some workers to be temporarily represented twice in the final result. The situation was transient and could only occur during digester changes.

    • Fixed an issue where a query would fail to start in some cases when the query cache was available. The user would see the error Recent events overlap span excluded from query using historicStartMin.

Recent Package Updates

The following LogScale packages have been updated within the last month.

  • Package Changes

    • infoblox/nios has been updated to v1.2.0.

      • Deprecation notice:

        • The old parser syslog-utc is deprecated and replaced by the new parser infoblox-nios. In this release the two parsers are identical except for the name, but all future changes will only go into the new infoblox-nios parser. We recommend switching to the newer parser as soon as possible for the smoothest upgrade; the old syslog-utc parser will be removed at some point in the future. In your data, the #type field contains the name of the parser, so any queries you have that search on this field need to accommodate this change.

      • Extends support for the syslog format.

      • Adds the following fields mapped to CPS: dns.question.name, dns.question.class, client.domain, client.ip, and server.ip.

      For more information, see Package infoblox/nios Release Notes.

    • zscaler/private-access has been updated to v1.2.0.

      Parser renaming and Deprecation notice

      As part of our continuous efforts to simplify and improve parser performance, we consolidated all existing parsers in this package into a single unified zscaler-privateaccess parser. This means the following parsers:

      • zscaler-zpa-app-connector-status-json

      • zscaler-zpa-app-protection-json

      • zscaler-zpa-audit-json

      • zscaler-zpa-browser-access-json

      • zscaler-zpa-user-activity-json

      • zscaler-zpa-user-status-json

        are deprecated and all future changes will only go into the new zscaler-privateaccess parser. The new parser requires a change on the Zscaler side in the log format for Zscaler Private Access sources.

        Follow the steps outlined below for the migration process:

      • Create a new ingest token and associate it with the new zscaler-privateaccess parser.

      • In the ZPA administration console:

        • Create a new log receiver and configure it with your LogScale Collector's IP address, TCP port, and TLS encryption details (if required).

        • Under the Log Stream tab, set the new log format for each log type that you want to send to LogScale.

      • Configure the LogScale Collector to receive ZPA logs in the new format.

      • Confirm that data in the new format is successfully ingested into LogScale.

      • Delete the ingest tokens for the old parsers.

      • Delete the configuration for the old parsers in the LogScale Collector.

      • Remove the configuration for the old format in the ZPA console.
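
      The receiver-side steps above can be sketched as a LogScale Collector configuration (a minimal sketch, assuming the standard syslog source and humio sink types; the port, URL, and token values are placeholders):

      ```yaml
      # Hypothetical LogScale Collector config for receiving ZPA logs over TCP.
      sources:
        zpa_tcp:
          type: syslog
          mode: tcp
          port: 514          # match the port configured in the ZPA log receiver
          sink: logscale
      sinks:
        logscale:
          type: humio
          token: ${ZPA_INGEST_TOKEN}   # the new token tied to zscaler-privateaccess
          url: https://your-logscale-instance
      ```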

      Misc

      • Bumps the minimum LogScale version to 1.142 to support assertions in YAML files.

      • Improves the field extraction and performance.

      For more information, see Package zscaler/private-access Release Notes.