Falcon LogScale 1.136.2 LTS (2024-06-12)
Version | Type | Release Date | Availability | End of Support | Security Updates | Upgrades From | Config. Changes |
---|---|---|---|---|---|---|---|
1.136.2 | LTS | 2024-06-12 | Cloud | 2025-05-31 | No | 1.112 | No |
TAR Checksum | Value |
---|---|
MD5 | e9ff17d2c3f763bbe282fc4055aa3ea4 |
SHA1 | 6d215f73a3f0794f5d25293dab541bb2172d525c |
SHA256 | 6682216a929202b826c7a3b2bbf504cee03c1c2c0ead20e87324b92c7f3e84cf |
SHA512 | e0a5092cce05067186ef90bb001092b880107a3e53bace59cb83f0f56bd919381f5bfb7cc4794a382e4756faa34777996b477dc58074dbc61ee0a4e2d2b8b9d5 |
Docker Image | Included JDK | SHA256 Checksum |
---|---|---|
humio | 21 | 2d23d1ac912f2521ea2f6df58d1eb71809a37aab906e4af1833ad6515d71aa39 |
humio-core | 21 | 3045f568bf56c831aa2d068de4e21921ab1a58730a17f6b72d0f20fc34467315 |
kafka | 21 | 5e7bcafde7f97247d39436debf051eed74d392d3c8108814deba44c1e5201532 |
zookeeper | 21 | 41f009aaf13990b57fefbc7c53718251d66e805412e9cb7afb970c55bb189304 |
Download: https://repo.humio.com/repository/maven-releases/com/humio/server/1.136.2/server-1.136.2.tar.gz
These notes include entries from the following previous releases: 1.136.1
Bug fixes and updates.
Important
Due to a known memory issue in this release, customers are advised to upgrade to 1.137.0 or later.
Breaking Changes
The following items create a breaking change in the behavior, response or operation of this release.
Functions
The limit parameter has been added to the rdns() function. It is controlled by the dynamic configurations RdnsMaxLimit and RdnsDefaultLimit. This addition is a breaking change due to incidents caused by the large implicit limit used before. For more information, see rdns().
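As a brief illustration of the new parameter, the query below is a minimal sketch; the field name ip and the limit value 1000 are assumptions for illustration, and the effective maximum remains governed by the RdnsMaxLimit and RdnsDefaultLimit dynamic configurations:

```logscale
// Resolve reverse DNS for values of the assumed field "ip",
// explicitly capping the number of events processed at 1000.
rdns(field=ip, limit=1000)
```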
Advance Warning
The following items are due to change in a future release.
Installation and Deployment
The LogScale Launcher Script for starting LogScale will be modified to change the way CPU core usage can be configured. The -XX:ActiveProcessorCount=n command-line option will be ignored if set. Users who need to configure the core count manually should set the CORES=n environment variable instead; this will cause the launcher to configure both LogScale and the JVM properly. This change is scheduled for 1.148.0.
For more information, see Configuring Available CPU Cores.
Removed
Items that have been removed as of this release.
Storage
The full JDK has been removed from the Docker images, leaving only the bundled JDK that is part of LogScale release tarballs.
Deprecation
Items that have been deprecated and may be removed in a future release.
The any argument to the type parameter of sort() and table() has been deprecated and will be removed in version 1.142. Warning prompts will be shown for queries that fall into either of these two cases:
If you are explicitly supplying an any argument, either remove both the parameter and the argument (for example, change sort(..., type=any) to sort(...)), or supply the argument for type that corresponds to your data.
If you are sorting hexadecimal values by their equivalent numerical values, change the argument of the type parameter to hex, e.g. sort(..., type=hex), as shown in the example after this note.
In all other cases, no action is needed.
The new default value for sort() and table() will be number. Both functions will fall back to lexicographical ordering for values that cannot be understood as the provided argument for type.
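For illustration, a minimal sketch of the migration paths described above; the field name responsetime is an assumption:

```logscale
// Before: explicit type=any (deprecated)
sort(responsetime, type=any)

// After, option 1: remove the type parameter entirely
// (the default will become number in 1.142)
sort(responsetime)

// After, option 2: name the type that matches your data,
// e.g. hex when sorting hexadecimal values numerically
sort(responsetime, type=hex)
```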
The following API endpoints are deprecated and marked for removal in 1.148.0:
POST /api/v1/clusterconfig/kafka-queues/partition-assignment
GET /api/v1/clusterconfig/kafka-queues/partition-assignment
POST /api/v1/clusterconfig/kafka-queues/partition-assignment/set-replication-defaults
The deprecated endpoints are used for viewing and changing the partition assignment in Kafka for the ingest queue. Administrators should use Kafka's own tools for editing partition assignments instead, such as the bin/kafka-reassign-partitions.sh and bin/kafka-topics.sh scripts that ship with the Kafka installation.
We are deprecating the humio/kafka and humio/zookeeper Docker images due to low use. The planned final release for these images will be with LogScale 1.148.0. Better alternatives are available going forward, and we recommend adopting them. If you still require humio/kafka or humio/zookeeper for needs that cannot be covered by these alternatives, please contact Support and share your concerns.
The HUMIO_JVM_ARGS environment variable in the LogScale Launcher Script will be removed in 1.154.0. The variable existed for migration from older deployments where the launcher script was not available. The launcher script replaces the need for manually setting parameters in this variable, so its use is no longer required. Using the launcher script is now the recommended method of launching LogScale. For more details on the launcher script, see LogScale Launcher Script. Clusters that still set this configuration should migrate to the other variables described at Configuration.
The following GraphQL queries and mutations for interacting with parsers are deprecated and scheduled for removal in version 1.142.
The deprecated createParser mutation is replaced by createParserV2(). The differences between the old and new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
fieldsToBeRemovedBeforeParsing can now be specified as part of the parser creation.
force field is renamed to allowOverwritingExistingParser.
sourceCode field is renamed to script.
tagFields field is renamed to fieldsToTag.
languageVersion is no longer an enum, but a LanguageVersionInputType instead.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
The deprecated removeParser mutation is replaced by deleteParser. The difference between the old and new mutation is:
The mutation returns a boolean to represent success or failure, instead of a Parser wrapped in an object.
The deprecated testParser mutation is replaced by testParserV2(). The differences between the old and new mutation are:
The test cases are now structured types, instead of just being strings. To emulate the old API, take the test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
The new test cases can contain assertions about the contents of the output.
The mutation output is significantly different from before, as it provides more detailed information on how a test case has failed.
The mutation now accepts both a language version and list of fields to be removed before parsing.
The parserScript field is renamed to script.
The tagFields field is renamed to fieldsToTag.
The deprecated updateParser mutation is replaced by updateParserV2() where more extensive test cases can be set. Continuing to use the previous API may result in test information on parsers being lost. To ensure information is not unintentionally erased, please migrate away from the deprecated APIs for both reading and updating parser test cases and use updateParserV2() instead. The differences between the previous and the new mutation are:
testData input field is replaced by testCases, which can contain more data than the old tests could. This includes adding assertions to the output of a test. These assertions are not displayed in the UI yet. To emulate the old API, you can take the old test string and put it in the ParserTestEventInput inside the ParserTestCaseInput, and they will behave the same as before.
sourceCode field, used for updating the parser script, is changed to the script field, which takes an UpdateParserScriptInput object. This updates the parser script and the language version together.
tagFields field is renamed to fieldsToTag.
The languageVersion is located inside the UpdateParserScriptInput object, and is no longer an enum, but a LanguageVersionInputType instead.
The repositoryName and id fields are now correctly marked as mandatory in the schema. Previously this wasn't the case, even though the mutation would fail without them.
The mutation returns a Parser, instead of a Parser wrapped in an object.
The old mutation had a bug where it would overwrite the languageVersion with a default value in some cases, which is fixed in the new one.
The mutation fails when a parser has more than 2,000 test cases, or the test input in a single test case exceeds 40,000 characters.
On the Parser type:
testData field is deprecated and replaced by testCases.
sourceCode field is deprecated and replaced by script.
tagFields field is deprecated and replaced by fieldsToTag.
For more information, see Parser, DeleteParserInput, LanguageVersionInputType, createParserV2(), testParserV2(), updateParserV2().
Behavior Changes
Scripts or environments that make use of these tools should be checked and updated for the new configuration:
Queries
Hitting the query count quota no longer cancels existing queries, but only disallows starting new ones.
For more information, see Query Count.
Upgrades
Changes that may occur or be required during an upgrade.
Storage
Docker images have been upgraded to Java 22.
Added new deployment artifacts. The published tarballs (e.g. server.tar.gz) are now available with a bundled JDK. The platforms currently supported are linux_x64 for 64-bit Linux, and alpine_x64 for 64-bit Alpine Linux and other musl-based Linux distributions. The Docker images have been updated to use this bundled JDK internally. We encourage users to migrate to using the tarballs with bundled JDKs.
New features and improvements
Installation and Deployment
The LogScale Launcher Script now sets -XX:+UseTransparentHugePages as part of the mandatory flags. THP is already enabled for all processes on many Linux distributions by default. This flag enables THP on systems where processes must opt into THP via madvise. We strongly recommend enabling THP for LogScale.
UI Changes
Time zone data has been updated to IANA 2024a and has been trimmed to +/- 5 years from the release date of IANA 2024a.
The query editor now shows completions for known field values that have previously been observed in results. For instance, #repo = m may show completions for repositories starting with m seen in previous results.
Sign-up for LogScale Community Edition is no longer available for new users. Links, pages and UI flows to access it have been removed.
The number of events in the current window has been added to Metric Types as window_count.
Automation and Alerts
Added logging when Alerts with Field-Based Throttling discard values and thus potentially trigger again before the throttle period expires.
For more information, see Field-Based Throttling.
The limit of 50 characters when naming a scheduled search is now removed.
GraphQL API
The querySearchDomains() query has been extended with the option to filter results by limit name as well as ordering results by limit name.
For more information, see querySearchDomains() .
Storage
The bucket transfer prioritization has been adjusted. When behind on both uploads and downloads, 75% of the S3_STORAGE_CONCURRENCY capacity is reserved for uploads, and 25% for downloads, rather than using all slots for downloads.
We reverted a change introduced in 1.131.0 intended to cause fewer mini-segments to move in the cluster when digest reassignment occurs. The change could cause mini-segments to not be balanced across cluster nodes in the expected way.
Configuration
The following configuration parameters have been introduced:
The amount of global metadata required for retention spans of over 30 days has been reduced. The amount of global metadata required in clusters with a high number of active datasources has also been reduced, as has the global size of mini segments, by combining them into larger mini segments.
Pre-merging mini segments reduces the number of segment files on disk (and in the bucket) and reduces the amount of metadata for segment targets in progress. This allows producing larger target segment files and reduces the amount of "undersized" merging of "completed" segments. It also allows a smaller flush interval for mini segments without incurring a larger number of mini segments.
This feature is only supported from v1.112.0. To safely enable it by default, the minimum version to upgrade from is now raised to v1.112.0, which disallows rollback to versions older than this.
The feature is on by default. It can be disabled using the feature flag PreMergeMiniSegments. Disabling the feature stops future merges of mini segments into larger mini segment files, but does not alter the defaults below, nor modify how already merged mini-segments behave.
For more information, see Global Database, Ingestion: Digest Phase.
The default values for the following configuration parameters have changed:
FLUSH_BLOCK_SECONDS = 900 (was 1,800)
MAX_HOURS_SEGMENT_OPEN = 720 (was 24; maximum is now 24,000)
Dashboards and Widgets
The automatic rendering of URLs as links has been disabled for the Table widget. Only URLs appearing in queries with the markdown style, e.g. [CrowdStrike](https://crowdstrike.com), will be automatically rendered as links in Table widget columns. Content, including plain URLs e.g. https://crowdstrike.com, can still be rendered as links, but this should now be explicitly configured using the Show as widget property. For more information, see Table Widget Properties.
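As an illustration of the markdown style mentioned above, a query can construct such a link before it reaches the Table widget; the field names url and link are assumptions for illustration:

```logscale
// Build a markdown-style link so the Table widget renders it as a clickable link.
format("[CrowdStrike](%s)", field=[url], as=link)
| table([link])
```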
Dashboard parameters have gotten the following updates:
The name of the parameter is on top of the input field, so more space is available for both parts.
A button has been added to multi-value parameters so that all values can be removed in one click.
The parameter configuration form has been moved to the side panel.
Multiple values can be added at once to a multi-value parameter by inputting a comma separated list of values, which can be used as individual values.
For more information, see Multi-value Parameters.
Ingestion
Ingest feed scheduling has been changed to be more gradual in ramping up concurrency and will also reduce concurrency in response to failures. This will make high-pressure failing ingest feeds fall back to periodic retries instead of constantly retrying.
For more information, see Ingest Data from AWS S3.
Parser test cases can now include assertions. This allows you to specify that you expect certain fields to have certain values in a test case after parsing, or that you expect certain fields to not be present at all. Note that the assertions are not exported as part of the YAML template yet.
For more information, see Writing a Parser.
Log Collector
Introducing Fleet Management Remote Updates allowing users to install the LogScale Collector via curl / PowerShell, and manage upgrades and downgrades centrally from Fleet Management.
For more information, see Managing Falcon Log Collector Versions - Instances, Manage Versions - Groups, Install Falcon Log Collector.
Queries
Queries are now allowed to be queued for start by the query coordinator for a maximum of 10 minutes.
For more information, see Query Coordination.
Functions
The optional limit parameter has been added to the readFile() function to limit the number of rows of the file returned (see the sketch after this list).
The geography:distance() function is now generally available. The default value for the as parameter has been changed to _distance. For more information, see geography:distance().
The onDuplicate parameter has been added to kvParse() to specify how to handle duplicate fields.
For Cloud customers: the maximum value of the limit parameter for the tail() and head() functions has been increased to 20,000.
For Self-Hosted solutions: the maximum value of the limit parameter for the tail() and head() functions has been aligned with the StateRowLimit dynamic configuration. This means that the upper value of limit is now adjustable for these two functions.
The readFile() function will show a warning when the results are truncated due to reaching the global result row limit. This behaviour was previously silent.
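A short sketch of the readFile() and head() changes listed above; the file name lookup.csv and the limit values are assumptions for illustration:

```logscale
// Read at most 100 rows from an uploaded file (file name assumed);
// if the global result row limit truncates the output, a warning is now shown.
readFile("lookup.csv", limit=100)

// head() and tail() now accept a larger limit:
// up to 20,000 on Cloud, or up to StateRowLimit on Self-Hosted.
| head(limit=20000)
```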
Other
New metrics ingest-queue-write-offset and ingest-queue-read-offset have been added, reporting the Kafka offsets of the most recently written and read events on the ingest queue.
The ConfigLoggerJob now also logs digestReplicationFactor, segmentReplicationFactor, minHostAlivePercentageToEnableClusterRebalancing, allowUpdateDesiredDigesters and allowRebalanceExistingSegments.
New metric events-parsed has been added, serving as an indicator for how many input events a parser has been applied to.
Fixed in this release
Security
Various OIDC caching issues have been fixed including ensuring refresh of the JWKS cache once per hour by default.
UI Changes
The formatting of @timestamp has been improved to make time-based visualizations fully compatible with time zones when selecting time zones other than the browser default.
The error Failed to fetch data for aliased fields would sometimes appear on the Search page of the sandbox repository. This issue has been fixed.
Data statistics in the Organizations overview page could not be populated in some cases.
Fixed an issue that prevented users from copying the query string from the flyout in the Recent / Saved queries panel.
Remaining occurrences of Humio have been replaced with LogScale in many places, primarily in GraphQL documentation and error messages.
Storage
Pending merges of segments would contend with the verification of segments being transferred between nodes or to/from bucket storage. This resulted in spuriously long transfer times, due to queueing of the verification step for the segment file. This issue has now been fixed.
redactEvents segment rewriting has been fixed for several issues that could cause either failure to complete the rewrite, or events to be missed in rare cases. Users should be aware that redaction jobs that were submitted prior to upgrading to a fixed version may fail to complete correctly, or may miss events. Therefore, you are encouraged to resubmit redactions you have recently submitted, to ensure the events are actually gone.
Dashboards and Widgets
A visualization issue has been fixed: the dropdown menu for saving a dashboard widget showed a wrong title in dashboards not belonging to a package.
Parameters appearing between a string containing \\ and any other string would not be correctly detected. This issue has been fixed.
Export options other than CSV were not available on the Dashboard page for a widget and on the Search page for a query result. This issue is now fixed.
Queries
Multiple clients might trigger concurrent computation of the result step for a shared query. This issue has been fixed: now only one pending computation is allowed at a time.
Functions
The error message shown when providing a non-existent query function in an anonymous query, e.g. bucket(function=[{_noFunction()}]), has been fixed.
The table() function has been fixed, as it would wrongly accept a limit of 0, causing serialisation to break between cluster nodes.
Other
A regression introduced in version 1.132 has been fixed, where a file name starting with shared/ would be recognized as a shared file instead of a regular file. However, a shared file should be referred to using exactly /shared/ as a prefix.
DNS lookup was blocked by heavy disk IO when using an HTTP proxy, causing timeouts. This issue has been fixed.
Packages
Uploading a package zip would fail on Windows devices. This issue has been fixed.
Known Issues
Other
An issue has been identified where a memory leak could cause a node to exhaust the available memory. Customers are advised to upgrade to 1.137.0 or higher.
Improvement
Installation and Deployment
An error log is displayed if the latency on global-events exceeds 150 seconds, to prevent nodes from crashing.
Storage
Removed some work from the thread that schedules bucket transfers, which could be slightly expensive in cases where the cluster had fallen behind on uploads.
Configuration
Whenever a SAML or OIDC IdP is created or updated, any leading or trailing whitespace will be trimmed from its fields. This is to avoid configuration errors.