Falcon LogScale 1.131.0 GA (2024-03-26)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes
---|---|---|---|---|---|---|---
1.131.0 | GA | 2024-03-26 | Cloud | 2025-04-30 | No | 1.106 | No
Available for download two days after release.
Bug fixes and updates.
Deprecation
Items that have been deprecated and may be removed in a future release.
The assetType GraphQL field on Alert, Dashboard, Parser, SavedQuery and ViewInteraction datatypes has been deprecated and will be removed in version 1.136 of LogScale.
The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142. Warning prompts will be shown in queries that fall into either of these two cases:
If you are explicitly supplying an any argument, please either simply remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.
If you are sorting hexadecimal values by their equivalent numerical values, please change the argument of the type parameter to hex, e.g. sort(..., type=hex).
In all other cases, no action is needed.
The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.
In the GraphQL API, the ChangeTriggersAndAction enum value for both the Permission and ViewAction enums is now deprecated and will be removed in version 1.136 of LogScale.
We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward. We recommend the following:
If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are as follows (a sketch of the new call appears after this list):
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
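For illustration, a minimal sketch of the new call is shown below; anything not named in the list above (the single input argument, name, repositoryName, event, rawString) is an assumption to verify against the GraphQL schema.

```graphql
# Sketch only: the single "input" argument and the field names name,
# repositoryName, event and rawString are assumptions; verify them against
# the createParserV2 input type in your cluster's GraphQL schema.
mutation CreateAccessLogParser {
  createParserV2(input: {
    name: "access-log"                      # assumed field
    repositoryName: "my-repo"               # assumed field
    script: "parseJson()"                   # was: sourceCode
    fieldsToTag: ["host"]                   # was: tagFields
    fieldsToBeRemovedBeforeParsing: []      # new in V2
    allowOverwritingExistingParser: false   # was: force
    testCases: [                            # was: testData
      {
        event: { rawString: "{\"host\":\"example\"}" }  # the old test string goes here
        outputAssertions: []                            # assertions are new and optional
      }
    ]
  }) {
    # The mutation now returns a Parser directly, not a wrapper object.
    id
    name
  }
}
```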
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.
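For illustration, a minimal sketch of the new call, assuming DeleteParserInput carries the repository name and parser id:

```graphql
# Sketch only: the DeleteParserInput fields shown are assumptions.
mutation DeleteOldParser {
  # Returns a boolean for success or failure rather than a Parser
  # wrapped in an object, so no selection set is needed.
  deleteParser(input: { repositoryName: "my-repo", id: "parser-id" })
}
```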
The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are as follows (a sketch of the new call appears after this list):
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
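For illustration, a minimal sketch of the new call; field names not listed above (repositoryName, parserName, event, rawString) are assumptions, and the result selection is left as a placeholder:

```graphql
# Sketch only: repositoryName, parserName, event and rawString are assumed
# field names, and the result selection is a placeholder; the new output
# type reports detailed per-test-case failures.
mutation TestAccessLogParser {
  testParserV2(input: {
    repositoryName: "my-repo"           # assumed field
    parserName: "access-log"            # assumed field
    script: "parseJson()"               # was: parserScript
    fieldsToTag: ["host"]               # was: tagFields
    fieldsToBeRemovedBeforeParsing: []  # now accepted by the test mutation
    testCases: [
      { event: { rawString: "{\"host\":\"example\"}" } }  # the old test string goes here
    ]
  }) {
    __typename  # placeholder; select the detailed result fields you need
  }
}
```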
The deprecated updateParser mutation is replaced by updateParserV2(), where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are as follows (a sketch of the new call appears after this list):
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
sourceCode field, used to update the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.
tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.
The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
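For illustration, a minimal sketch of the new call; the inner shape of UpdateParserScriptInput and of the test case is assumed and should be verified against the schema:

```graphql
# Sketch only: the inner field names of UpdateParserScriptInput and of the
# test case (script, languageVersion, event, rawString) are assumptions.
mutation UpdateAccessLogParser {
  updateParserV2(input: {
    repositoryName: "my-repo"  # now correctly marked as mandatory
    id: "parser-id"            # now correctly marked as mandatory
    script: {                  # was: sourceCode; takes an UpdateParserScriptInput
      script: "parseJson()"
      languageVersion: { name: "legacy" }  # assumed shape of LanguageVersionInputType
    }
    fieldsToTag: ["host"]      # was: tagFields
    testCases: [               # was: testData
      { event: { rawString: "{\"host\":\"example\"}" } }
    ]
  }) {
    # Returns a Parser directly, not a wrapper object.
    id
    name
  }
}
```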
On the Parser type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
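For illustration, a sketch of a fragment selecting the replacement fields when reading parsers; the sub-selection under testCases is an assumption:

```graphql
# Sketch only: spread this fragment into whatever query you use to fetch
# parsers; the sub-selection under testCases is an assumption.
fragment ParserFields on Parser {
  script        # was: sourceCode
  fieldsToTag   # was: tagFields
  testCases {   # was: testData
    event {
      rawString
    }
  }
}
```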
For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2() and updateParserV2().
In the GraphQL API, the name argument to the parser field on the Repository datatype has been deprecated and will be removed in version 1.136 of LogScale.
Behavior Changes
Scripts or environments which make use of these tools should be checked and updated for the new configuration:
Storage
We've removed a throttling behavior that prevented background merges of mini-segments from running when digest load is high. Such throttling can cause global in the LogScale cluster to grow over time if the digest load isn't transient, which is undesirable.
Moving mini-segments to the digest leader in cases where it is not necessary is now avoided. This new behavior reduces global traffic on digest reassignment.
Registering local segment files is skipped on nodes that are configured to not have storage via their node role.
When booting, a node now waits until it has caught up to the top of global before publishing the start message. This should help avoid global publish timeouts on boot when global has a lot of traffic.
New features and improvements
UI Changes
The width of the parser test window can now be adjusted.
Other
The metrics endpoint for the scheduled report render node has been updated to output the Prometheus text-based format.
Fixed in this release
UI Changes
Duplicate HTML escaping has been removed to prevent recursive field references from having double-escaped formatting in emails.
Storage
We've fixed a rarely hit error in the query scheduler causing a ClassCastException for scala.runtime.Nothing.
Functions
The join() function has been fixed, as warnings from the sub-query would not propagate to the main-query result.
Serialization of very large query states would crash nodes by requesting an array larger than what the JVM can allocate. This issue has been fixed.
Early Access
Functions
The readFile() function no longer supports JSON files. The function is currently labeled as Early Access.
Improvement
Storage
Concurrency for segment merging is improved, thus avoiding some unnecessary and inefficient pauses in execution.
We've switched to running the RetentionJob in a separate thread from DataSyncJob. This should enable more consistent merging.
The RetentionJob work is now divided among nodes such that there's no overlap. This reduces traffic in global.
An internal limit on use of off-heap memory has been adjusted to allow more threads to perform segment merging in parallel.
Functions
Some performance improvements have been made to the join() function, allowing it to skip blocks that do not contain the specified fields of the main and sub-query.