Falcon LogScale 1.165.0 GA (2024-11-19)

Version: 1.165.0
Type: GA
Release Date: 2024-11-19
Availability: Cloud
End of Support: 2025-12-31
Security Updates: No
Upgrades From: 1.112.0
Downgrades To: 1.157.0
Config. Changes: No

Available for download two days after release.

Bug fixes and updates.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The lastScheduledSearch field from the ScheduledSearch datatype is now deprecated and planned for removal in LogScale version 1.202. The new lastExecuted and lastTriggered fields have been added to the ScheduledSearch datatype to replace lastScheduledSearch.

Upgrades

Changes that may occur or be required during an upgrade.

  • Installation and Deployment

    • Once LogScale has been upgraded to 1.162.0 with the WriteNewSegmentFileFormat feature flag enabled, LogScale cannot be downgraded to a version lower than 1.157.0.

New features and improvements

  • Security

    • Users can now see and use saved queries without needing the CreateSavedQueries or UpdateSavedQueries permissions.

    • Users can now see actions in restricted read-only mode when they have the ReadAccess permission on the repository or view.

  • User Interface

    • Users with the ReadAccess permission on the repository or view can now view scheduled reports in read-only mode.

    • Files grouped by package are displayed again on the Files page, including the Package Name column; both were temporarily unavailable after the recent page overhaul.

  • API

    • The /api/v1/globalsubset/clustervhost endpoint now supports returning results larger than 1GB. The size of the returned result is limited to 8GB.

  • Ingestion

    • Increased the timeout for loading new CSV files used in parsers, to reduce the likelihood of parser failures.

    • Added logging for when a parser fails to build and ingestion falls back to ingesting without parsing. The log lines start with Failed compiling parser.

  • Functions

    • A new trim parameter has been added to the parseCsv() function to ignore whitespace before and after values; in particular, it allows quotes to appear after whitespace. This is a non-standard extension useful for parsing data from sources that do not adhere to the CSV standard.

    • The following new functions have been added:

    • bitfield:extractFlags() can now handle unsigned 64 bit input. It can also handle larger integers, but only the lowest 64 bits will be extracted.
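As a sketch of how the new parseCsv() trim parameter might be used (the field name and column names below are hypothetical), a query like the following would parse a CSV-formatted field while tolerating stray whitespace around the values:

```
// Hypothetical event field: result=" alice , "login" , 200"
// With trim=true, whitespace before and after each value is ignored,
// so quoted values preceded by spaces are still recognized as quoted.
parseCsv(field=result, columns=[user, action, status], trim=true)
```

Without trim, such sources typically need to be cleaned up before parsing, since the CSV standard does not allow whitespace before a quoted value.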

Fixed in this release

  • Security

    • OIDC authentication would fail if certain characters in the state variable were not properly URL-encoded when redirecting back to LogScale. This issue has been fixed.

  • GraphQL API

    • The role.users() query would return duplicate users in some cases. This issue has been fixed.

  • Storage

    • Recently ingested data could be lost when the cluster has bucket storage enabled, USING_EPHEMERAL_DISKS is set to false, and a recently ingested segment only exists in bucket storage. This issue has now been fixed.

    • LogScale could spuriously log Found mini segment without replacedBy and a merge target that already exists errors when a repository is undeleted. This issue has been fixed.

  • Functions

    • In defineTable(), the start and end parameters did not work correctly when the primary query's end time was a relative timestamp: the sub-query's time window was relative to now. It is now relative to the primary query's end time, as intended.
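As a hedged sketch of the corrected behavior (the table name, field names, and sub-query below are hypothetical): when the primary query runs with a relative end time, a sub-query window given via start is now anchored to that end time rather than to the moment the query executes:

```
// Build an ad-hoc table covering the 24 hours before the primary
// query's end time (previously: 24 hours before "now").
defineTable(name="failed_logins", query={status="failure" | groupBy(username)}, start=24h)
| match(table=failed_logins, field=username)
```

This matters for scheduled or backfilled queries, where the query's end time can be far from the wall-clock time at which it runs.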

  • Other

    • Query result highlighting would crash cluster nodes when getting filter matches for some regexes. This issue has been fixed.

Known Issues

  • Ingestion

    • An issue has been identified where construction of parsers that use files may time out when the Ad-hoc tables feature is enabled. This issue potentially impacts clusters running versions 1.165 through 1.170.

      Mitigation: temporarily disable the ad-hoc tables feature on affected clusters.

      Solution: upgrade to version 1.171, where this issue has been resolved.

  • Functions

    • A known issue in the implementation of the defineTable() function means it is not possible to transfer generated tables larger than 128MB. The user receives an error if the generated table exceeds that size.