Falcon LogScale 1.131.1 LTS (2024-04-17)

Version: 1.131.1
Type: LTS
Release Date: 2024-04-17
Availability: Cloud
End of Support: 2025-04-30
Security Updates: No
Upgrades From: 1.106
Config. Changes: No
TAR Checksums:

  MD5     4a9223ff7d628a52257783b70b084726
  SHA1    3666c2ac1eea45e07ea9a89f0c16eafffebc1e01
  SHA256  5eb83a4ee2c9a8792f1ac1ec9ddad9282a5e9e98d523a77556762eded9fd50ad
  SHA512  86000582f6b4134f85943ae2385b0b17113f241f988864c9113f2df639f4a2f97a6eba69edb305ec57e2e0db53578a79fb7f54aa15b9acd909092d8cc88f1438
Docker Images:

  Image       Included JDK  SHA256 Checksum
  humio       21            adcf2fea3d8f9c10b764a73577959eeb5c58cdb2955e69846b26effc5758e0b9
  humio-core  21            2985c7ec6bde2f3c8904f71d238e7fdd70547c9d71488aea997acb89cf2d15ec
  kafka       21            262c7e74062a32cecee9119836752ee6310662d570f80926e7dd36dcb785d380
  zookeeper   21            b9b0349704cc996701c65cf713c1584c0b5db7f70cb00d53bf1051c50e0e99ab

Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.131.1/server-1.131.1.tar.gz

Bug fixes and updates.

Removed

Items that have been removed as of this release.

GraphQL API

  • The enabledFeatures() query has been removed from the GraphQL schema. Use the featureFlags() query instead.

Deprecation

Items that have been deprecated and may be removed in a future release.

  • The assetType GraphQL field on the Alert, Dashboard, Parser, SavedQuery, and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.

  • The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142.

    Warning prompts will be shown for queries that fall into either of these two cases:

    • If you are explicitly supplying an any argument, either remove the type parameter entirely, for example by changing sort(..., type=any) to sort(...), or supply the type argument that corresponds to your data.

    • If you are sorting hexadecimal values by their equivalent numerical values, change the type argument to hex, for example sort(..., type=hex).

    In all other cases, no action is needed.

    The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be interpreted as the provided type. Both migrations are illustrated in the sketch below.
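
    A minimal sketch of both migrations (the field names below are illustrative assumptions, not taken from this release note):

      // Explicitly supplied any argument: simply drop it
      // before: sort(responsetime, type=any)
      sort(responsetime)

      // Hexadecimal values previously sorted via any: state the type explicitly
      // before: sort(errorcode, type=any)
      sort(errorcode, type=hex)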

  • In the GraphQL API, the ChangeTriggersAndAction enum value on both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.

  • We are deprecating the humio/kafka and humio/zookeeper Docker images due to low usage. The planned final release for these images will ship with LogScale 1.148.0.

    Better alternatives are available going forward. We recommend the following:

    • If your cluster is deployed on Kubernetes: Strimzi

    • If your cluster is deployed to AWS: MSK

    If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.

  • The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.

    • The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:

      • The testData input field is replaced by testCases, which can contain more data than the old tests could, including assertions on the output of a test. These assertions are not yet displayed in the UI. To emulate the old API, take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput; it will behave the same as before.

      • fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.

      • force field is renamed to allowOverwritingExistingParser.

      • sourceCode field is renamed to script.

      • tagFields field is renamed to fieldsToTag.

      • languageVersion is no longer an enum, but a LanguageVersionInputType instead.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    • The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:

      • The mutation returns a boolean representing success or failure, instead of a Parser wrapped in an object.

    • The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:

      • The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.

      • The new test cases can contain assertions about the contents of the output.

      • The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.

      • The mutation now accepts both a language version and a list of fields to be removed before parsing.

      • The parserScript field is renamed to script.

      • The tagFields field is renamed to fieldsToTag.

    • The deprecated updateParser mutation is replaced by updateParserV2(), which allows more extensive test cases to be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, migrate away from the deprecated APIs for both reading and updating parser test cases, and use updateParserV2() instead. The differences between the previous and the new mutation are:

      • The testData input field is replaced by testCases, which can contain more data than the old tests could, including assertions on the output of a test. These assertions are not yet displayed in the UI. To emulate the old API, take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput; it will behave the same as before.

      • The sourceCode field, used to update the parser script, is replaced by the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.

      • tagFields field is renamed to fieldsToTag.

      • The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.

      • The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.

      • The mutation returns a Parser, instead of a Parser wrapped in an object.

      • The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.

      • The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.

    On the Parser type:

    • testData field is deprecated and replaced by testCases.

    • sourceCode field is deprecated and replaced by script.

    • tagFields field is deprecated and replaced by fieldsToTag.

    For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().

  • In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.

Behavior Changes

Scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Security

    • DNS caches are now invalidated after 60 seconds instead of never. To override this behavior, set the networkaddress.cache.ttl security property of the JRE (see Java Networking Properties), as sketched below.
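
      As a hedged sketch: this standard JVM security property is normally set in the JRE's java.security file (the value below is only an illustrative example):

        networkaddress.cache.ttl=30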

  • Ingestion

    • It is no longer possible to delete a parser that is being used in an ingest feed. The parser must first be removed from the ingest feed.

      For more information, see Delete an Ingest Feed.

New features and improvements

  • Upgrades

    • The minimum version required to upgrade from has been raised to 1.106, to allow the removal of workarounds kept for compatibility with older versions.

  • Security

    • Added support for authorizing with an external JWT from an IdP set up in our cloud environment.

    • The audience for dynamic OIDC IdPs in our cloud environments is now logscale-$orgId, where $orgId is the ID of your organization.

    • Added support for Okta's federated IdP OIDC extension for identity providers set up in the cloud.

  • Automation and Alerts

    • Throttling and field-based throttling have been introduced as optional functionality for Filter Alerts. The minimum throttling period is 1 minute.

    • The customizable trigger limit for Filter Alerts has been removed. The trigger limit is now automatically determined based on the associated actions: if one or more email actions are associated, the trigger limit is 15; otherwise, it is 100. Any existing custom trigger limit of 1 will be treated as a throttling period of 1 minute; all other custom trigger limits will be ignored. This is a non-backwards-compatible change to the GraphQL APIs for Filter Alerts, so any automation for these alerts must be updated.

  • Configuration

    • The new dynamic configuration MaxOpenSegmentsOnWorker has been introduced to control the hard cap on open segment files for the scheduler. In most cases the scheduler should not reach this limit; it acts only as a backstop. We therefore recommend that administrators do not modify this setting unless advised to do so by CrowdStrike Support.

    • Authorization attempted via JWT tokens will now only fetch user information from the user info endpoint if the scope in the access token contains any of the following: profile, email, openid. If no such scope is present in the token, LogScale will try to extract the username from the token, and no other user details will be added. The scope claim is extracted based on the new environment variable OIDC_SCOPE_CLAIM, which defaults to scope; see the sketch below.
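
      For example, an IdP that emits its scopes under a non-standard claim could be accommodated as follows (the claim name scp is a hypothetical example, not taken from this release note):

        OIDC_SCOPE_CLAIM=scp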

  • Queries

    • Queries may now be queued for start by the query coordinator for a maximum of 10 minutes.

      For more information, see Query Coordination.

  • Functions

    • The parseTimestamp() function is now able to parse timestamps with nanosecond precision.
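
      A minimal sketch (the field name and format string below are illustrative assumptions, following Java DateTimeFormatter patterns):

        parseTimestamp(format="yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSSXXX", field=ts)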

    • The setField() query function is introduced. It takes two expressions, target and value, and sets the field named by the result of the target expression to the result of the value expression. This function can be used to manipulate fields whose names are not statically known but computed at runtime.

      For more information, see setField().
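
      A minimal sketch (names and values below are hypothetical):

        // equivalent to score := 42, but with the field name computed at runtime
        name := "score"
        | setField(target=name, value=40 + 2)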

    • The getField() query function is introduced. It takes an expression, source, and sets the field given by the as parameter to the result of the source expression. This function can be used to manipulate fields whose names are not statically known but computed at runtime.

      For more information, see getField().
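
      A minimal sketch (names below are hypothetical):

        // read the field whose name is stored in another field
        name := "score"
        | result := getField(name)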

  • Other

    • The split by AWS record setting within ingest feeds will now accept numbers with leading zeros.

    • The missing-cluster-nodes metric will now track nodes that are missing heartbeat data in addition to nodes with outdated heartbeat data. The new missing-cluster-nodes-stateful metric will track registered nodes with outdated or missing heartbeat data that can write to global.

      For more information, see Node-Level Metrics.

    • The default IP filter for IdP and RDNS operations is now more restrictive: RDNS now defaults to denying lookups of reserved IP ranges, and the filter has been updated to deny additional reserved IP ranges, as specified by the IANA. Self-hosted administrators can specify their own filters using the environment variables IP_FILTER_IDP, IP_FILTER_RDNS, and IP_FILTER_RDNS_SERVER.

Fixed in this release

  • UI Changes

    • Field aliases could not be read on the sandbox repository. This issue is now fixed.

    • CSV files produced by LogScale for sending as attachments from email actions or uploaded through a LogScale Repository action could contain values where part of the text was duplicated. This would only happen for values that needed to be quoted. This issue is now fixed.

  • Automation and Alerts

    • Filter Alerts with field-based throttling could trigger on two events with the same value for the throttle field, if actions were slow. This issue is now fixed.

  • Ingestion

    • Fixed an issue that prevented the creation of Netflow/UDP protocol ingest listeners.

    • Cloning a parser from the UI would not clone the fields to be removed before parsing. This issue is now fixed.

  • Dashboards and Widgets

    • A dashboard with fixed shared time as default would not update correctly when selecting a new relative time. This issue is now fixed.

  • Queries

    • Multiple clients could trigger concurrent computation of the result step for a shared query. This issue has been fixed: only one pending computation is now allowed at a time.

  • Other

    • An issue with the IOC Configuration causing the local database to update too often has now been fixed.

  • Packages

    • Updating a package could fail, if one of the assets from the package had been deleted from the view where the package was installed. This issue has been fixed.

    • When attempting to upload a package disguised as a folder, some browsers would show a generic error message. To fix this issue, only ZIP files are now accepted.

Improvement

  • Storage

    • Moved the work of creating a global snapshot for upload to bucket storage from the thread coordinating segment uploads/downloads to a separate thread. This improves the reliability of uploading and downloading the global snapshot to/from bucket storage.

    • SegmentChangesJobTrigger has been disabled on nodes that are configured not to store segments, saving some CPU time.

  • Configuration

    • The default value for AUTOSHARDING_MAX has changed from 128 to 1,024.

    • The default maximum limit for groupBy() has been increased from 200,000 to 1,000,000, meaning the function can now be asked to collect up to a million groups. However, due to stability concerns, groupBy() is not allowed to return the full million rows when it is the last aggregator: this is governed by the QueryResultRowCountLimit dynamic configuration, which remains unchanged. The new limit is therefore best utilized when groupBy() is used as a computational tool for creating groups that are later aggressively filtered or aggregated down in size, as in the sketch below. If you experience resource strain or starvation on your cluster, you can reduce the maximum limit via the GroupMaxLimit dynamic configuration.
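
      A minimal sketch of that pattern (the field name below is a hypothetical example): group on a high-cardinality field, then filter the groups down before the final result:

        groupBy(client_ip, function=count())
        | _count >= 1000
        | sort(_count, type=number)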

    • The default value for AUTOSHARDING_TRIGGER_DELAY_MS has changed from 1 hour to 4 hours.

    • The default memory limit for the query coordinator node has been increased from 400 MB to 4 GB. This new limit allows each query to use up to 1 GB of memory and thus produce more results, at the cost of consuming more resources. This in turn indirectly limits the number of concurrent queries, as the query scheduler may choose not to run a given query before existing queries have completed. If you experience resource strain or starvation on your cluster, you can reduce the memory limit by setting the QueryCoordinatorMemoryLimit dynamic configuration to 400,000,000.

  • Functions

    • Live queries now restart and run with the updated version of a saved query when the saved query changes.

      For more information, see User Functions (Saved Searches).

    • Reduced the memory required when processing empty arrays in functions that accept them.

  • Other

    • Improved handling of segments being replaced due to either merging or event redaction, to address rare cases of event duplication when segments are replaced multiple times in quick succession.